The key to AI jobs might be very human.
So we might want to think about how to make hiring for AI jobs a little better for humans.
The issue
We need to talk about how humans could survive and thrive despite AI job loss. It’s more than being “AI literate” and “learning how to use ChatGPT better,” yet potentially much easier than a massive retrain of the entire working world.
Well, if we let it be…
Explain.
You may have heard that AI is coming to take 50 percent of everyone’s jobs, move into their houses, and make them sleep on the cold concrete floors of their garages.
If it makes you feel better, I’m not sure about that level of direness in the near term, and some recent job losses blamed on AI may be a hair overstated. But AI tools, as currently built, are very good at instantly doing several things that humans can do well in a not-quite-so-instant way—processing data, making calculations, finding things on the internet. For business leaders, AI doing complex things fast raises the question of why they need some of the humans they’re paying when the computer—which doesn’t incur inefficient costs like “benefits” or “breaks”—can do the same thing, faster.
That hasn’t necessarily had the expected results for executives who were first to welcome our new robot overlords. But it’s the conversation we seem to keep having. God help us all.
The technology will grow to do more work better and more predictably than it does now. Executives aren’t going to stop seizing upon every new opportunity to cut overhead through the bone. Workers are going to be asked to gain AI skills in their jobs or risk losing them. They may lose them anyway and need to find another job where they will be asked to gain and use AI skills.
The problem is that… it’s not exactly clear what those “AI skills” are, based on the discourse about them. If you ask someone in politics or tech how workers are supposed to survive the age of AI—and perhaps even thrive—a lot of the answers are like this:
This version of “AI literacy” isn’t a terribly high bar, and quite frankly, it’s the type of thing you can learn trying to coach ChatGPT to draw pictures of cats in go-karts drunkenly doing doughnuts in a drugstore parking lot.1
It’s hard to see that basic kind of “AI literacy” as a useful job skill on its own if it’s just learning how to write prompts. This is why I found this report, published last week by Lightcast, incredibly helpful.
Lightcast analyzed job postings to decode which skills AI jobs actually require. Only two of the top 10 were actual technical skills related to AI. The rest were very human skills—like leadership, management, and customer service—and a few that some folks might assume AI is better at than humans—communications, research, writing skills, operations, and problem solving.
Yes, knowing how to write algorithms and develop machine learning models is in high demand. If you’re looking for a technical skill to land a job in the next couple of years, sure, those are great ones to nurture.
But as Lightcast put it:
[W]hen eight of the top ten skills for AI-enabled roles are human capabilities, programs that focus exclusively on technical AI training are missing the mark.
The most in-demand “AI skills,” then, are actually a suite of very, very human skills needed to successfully integrate AI into work, not just be literate in it. That hints that a bucket of skills—which aren’t necessarily all that technical and specific to an occupation—could create worker access to all kinds of jobs, especially if AI lowers barriers to entry to jobs that have awfully high and expensive ones now.
Investing in the human skills that endure also seems like a safer bet than building trainings for the AI jobs of today that might not be here a year from now. This feels like the 43rd The Machines Are Coming to End Your Career Trend of my working life.2 “Safe” jobs from previous technological jobpocalypses—like coding—aren’t terribly safe now, and many of those jobs never manifested for workers to whom they were promised as a lifeline.
That’s part of why I find the “You should learn how to write better prompts” conversation so grating at times—prompts built for the current form of ChatGPT or whatever might not hold much value for long. And you can fine-tune your prompting skills in just a few minutes of trying to get Gemini to draw a broke dog in the universe of the Sega Dreamcast classic Jet Set Radio.
It’s very easy to get caught up thinking through the possibilities of this sort of future. Yes, workers could have to move around a lot, but they could do jobs that were previously walled off by exclusionary education and training systems. Yes, some retraining might be needed, but if barriers to entry are gone, workers could hop from one outmoded job to a new job almost immediately based on a suite of skills that carries them along a current of careers.
That would be a nice conversation to have—on a different planet than this one.
On this planet, if you ask Tech CEOs what workers need to do to get jobs in the future, they’re going to tell you “Write better prompts.” Probably because that’s the only thing they (wrongly) think most workers are capable of.
On this planet, political leaders think of work only in terms of barriers to entry, and they seem apt to defer to executives who, as many a workforce provider can tell you, probably don’t know their actual skilling needs or don’t have as much say in actual hiring as they think. For example:
Speaking to an audience of mostly private sector executives, [Deputy Labor Secretary Keith Sonderling] said: "You need to tell us where those jobs are going."3
He said the government could use that information to "fund the education institutions on those skills, so you have those skills you need and the workers are ready to go."
And on this planet, we have busted forms of hiring not built for people hopping from sector to sector on the strength of one universal bucket of skills. For many employers, hiring is more about shrinking The Pile of applicants than finding people who can do the job well, and Boxes Must Be Checked. That type of hiring is not compatible with the idea of, say, a former copywriter who is good at communicating and thinking in systems being the perfect fit for a job in health tech that needs smart AI integration.
So what do we do about it?
So… wanna make this planet better?
I don’t want to read too much into the Lightcast report, but it’s hard not to read it and think we really, really need to invest in skills-first hiring, or hiring based on a person’s skillset, not whether they touched arbitrary bases along the way that “show” they acquired those skills. Not as many jobs are as bespoke and special as employers (and policymakers) treat them at the moment. If squishier skills like leadership turn out to be key to the post-AI jobs market, we need more systems and approaches that fairly assess them during the hiring process. And we need fewer (AI-driven) systems that filter out talented people for not using the right keywords in a resume, or ones that treat tossed-together questionnaires as deeply scientific measures of what someone can do.
Separately, human workers seem a whole lot more valuable to this AI integration work than many business leaders probably think. If human skills are really valuable to this work, and the human working population is shrinking—another thing Silicon Valley seems to think a lot about—basic economics suggests their wages and treatment should improve, not decline. Especially if they’re likely to switch jobs quite a bit over the coming years to keep integrating new tech.
All that said, it’s OK if you feel like the future is scary-looking because of the range of conceivable bad outcomes. This is a hard set of questions and an imperfect world to answer them in. But I also think there are signs from the job market that we need to think of better ways of putting together those workforce answers—even if the people at the levers of the world haven’t gotten the message yet.
Plus, on the upside, here’s what you get if you type the prompt “bunny rave” into Veo.
Card subject to change.
I’m planning to take a Tuesday off in the next couple of weeks, when I’ll re-up one of my favorite (and suddenly very relevant) pieces. In the meantime, I’m monitoring a number of live things that might have a couple surprise posts sneak into your inbox. And I’ll be back Friday to talk apprenticeship and whatever workforce funds wriggle out.
JOBS THAT WORK strongly condemns any instances of cats drinking and driving. The above images are included only for very important illustrative purposes, and all go-karting cats should have the proper training and licensing and wear appropriate kitty protective equipment.
In undergrad, I was once asked very gravely, “What are you going to do about blogs?” I think about that more often than you might suppose.
To be a little fairer to Keith, the outdated legal structures of the federal workforce system only think in a narrow version of “jobs” that you can access only by filling a container with a very specific amount of technical skills. Also, I can tell you what the congressional oversight would look like if you started a grant program training people in “leadership” and “communication” to get good jobs in AI.