Could GPT and Bard actually help blue-collar fortunes?
Global labor markets will be just as directly affected by what GPT-4 and other LLMs cannot do.
All of the current analysis of the economic impact of LLMs like ChatGPT and Bard on labor markets seems to concentrate on first-order effects — jobs that the AI algorithms can themselves replace or unburden. The consensus seems to be coalescing nicely around the expectation that jobs for paralegals, copywriters, translators, telemarketers, artists and journalists will shrink in numbers, hours or wages. The most recent paper by OpenAI itself (with Cornell) says as much. It adds survey researchers, PR agents, tax consultants, and some surprising entries, including blockchain engineers and financial analysts.
That rule may indeed extend all the way up to some of the modern economy’s highest-earning workers. It turns out that the eventual results of applying all the specialized skill of asset managers on Wall Street get meticulously recorded in transactions at the world’s largest and oldest financial institutions. Morgan Stanley has been training a custom GPT-4 instance on its decades’ worth of financial know-how. Jeff McMillan, Head of Analytics, Data & Innovation, bills it as:
having our Chief Investment Strategist, Chief Global Economist, Global Equities Strategist, and every other analyst around the globe on call
As an epiphenomenon, though, there is a good chance that the very best talent in these professions might command premium wages as middling work by AI floods the zone, to borrow a phrase from Steve Bannon (who may well drive much of the flooding).
But there has been little analysis in global English media (as far as yesterday’s traditional search engines can tell) of a critical second-order effect — hiring decisions that stem from what LLMs cannot do, i.e. physical work. It seems entirely plausible to me that LLMs that match or exceed GPT-4 in versatility might pull armies of blue-collar workers into what used to be ‘knowledge’ work. How? It requires a little imagination, but bear with me on one far-fetched example. It’s meant to put in perspective how likely the shift in shifts might be in other domains.
Imagine a chemistry lab at an eminent university. Right now, a mid-sized lab employs about a dozen graduate students and a handful of administrative staff. All researchers at such a lab need to have a college degree so they understand every intricate detail of the experiment, even if for months on end they might be engaged in rote routines.
In many science labs (not all), some parts of the physical work could be delegated to people with a high school education, if they had an AI supervisor that can lay out a step-by-step procedure and guide them through their day’s work. At the beginning of a project, the professor would only need to ask GPT to generate the detailed sequential instructions with reference to all the equipment involved, with the last experiment as part of its ‘context window’. After every step, the worker would observe and record the results as instructed. What is more, with an AI model that has had additional training in this lab’s specific area of work, on reams of information about practical methods, limitations, typical pitfalls and the like, the workers would have a supervisor they can consult any time they get stuck.
“Hey, Sydney, you said to wait about 15-20 seconds and then report whether the solution turned blue or yellow [1]. But it turned pink!”
Well acquainted with past notes of burned-out PhD students in a hurry, Sydney will go: “Are you sure you washed that test-tube thoroughly after the last round of tests? Even a tiny amount of Hydrogen Peroxide residue could be interfering with your experiment in the manner you describe.”
As I said, it sounds far-fetched for the example I picked. But if you’ve spent time with GPT, the technology is definitely already here to generate instructions and real-time consultation on any number of physical tasks that highly educated people perform today in all kinds of jobs. The reports you’ve read of LLMs ‘hallucinating’ or making factual errors mostly come from human testers pushing them outside their admittedly vast general knowledge field. When trained on narrow domains [2], they’re well capable of the wondrous scenario above. Plus, as you know to expect by now with this technology, to read economic auguries accurately we have to update our priors at least twice a day. Just yesterday, GPT subsumed [3] a plugin that radically levels up its powers in ways particularly relevant to this line of research.
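To make the lab scenario a bit more concrete, here is a minimal sketch of the ‘last experiment in the context window’ idea. Everything here is hypothetical: the function name, the equipment and the notes are invented, and a real system would hand the assembled prompt to an actual LLM API rather than stopping at a string.

```python
def build_supervisor_prompt(last_experiment_notes, equipment, task):
    """Assemble an in-context-learning prompt for step-by-step lab guidance.

    The previous experiment's notes ride along inside the prompt (the
    model's 'context window'), so the generated steps can reference this
    lab's actual equipment and past pitfalls.
    """
    equipment_list = "\n".join(f"- {item}" for item in equipment)
    return (
        "You are a lab supervisor writing instructions for a technician "
        "with a high school education. Use plain language.\n\n"
        f"Available equipment:\n{equipment_list}\n\n"
        f"Notes from the previous experiment:\n{last_experiment_notes}\n\n"
        f"Task: {task}\n\n"
        "Write numbered steps. After each step, say what the technician "
        "should observe and record."
    )

# Hypothetical usage; in practice the returned string would be sent to an LLM.
prompt = build_supervisor_prompt(
    last_experiment_notes="Run 14: solution turned pink; suspect residue in glassware.",
    equipment=["centrifuge", "pH meter", "fume hood"],
    task="Repeat the titration from run 14 with freshly washed glassware.",
)
```

The point of the sketch is only that the ‘supervision’ is assembled from records the lab already keeps, which is exactly the kind of standard-medium instruction the essay returns to later.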
By the way, what do the students do after the experiment is concluded? They write! Usually academic papers that conform to well-worn conventional templates, again a job that the lab lead or the now much smaller team of grad students can automate and then edit. That’s the first-order effect again, active at the other end of what universities and scientific journals see as a production process.
At this point you might be thinking: how big a labor cohort might this be? Maybe the posse of overqualified sous-chefs in whatever haute cuisine TV show you like. Or a film studio. Generally, any industry that exploits college graduates in a normalized rite of passage? That’s not an unreasonable guess. Those underpaid, educated workers grumble, and attrition is high, with the associated costs of disruption and hiring. Middle managers could replace at least a portion of these ‘apprentices’ or interns with high-school graduates grateful for wages slightly higher than they earned in less glamorous industries.
But I don’t primarily mean specialized jobs that in some part involve menial tasks in the sense of physical effort. I mean any portion of a white-collar job that requires hands and fingers, back to the Latin root of ‘manual’. Think healthcare professionals. Or museum staff. If you break down any medium-salaried job into discrete sequential tasks, you might find that the distribution of ‘handiwork’ varies a lot across industries and within firms. In fact, the OpenAI paper cited above uses just such a database, though at much lower resolution and coverage than I’d like. And, as I said, we now want to look at the inverse set of tasks.
So here are Presbyopia’s suggested research questions:
What proportion of which white-collar jobs involves physical tasks?
If such tasks were reorganized around an AI plus a team of humans, what wages would those humans merit while maintaining on-par productivity?
How many such new jobs might a given economy generate?
The first is best answered with surveys, staggered into country clusters by wage levels and size of the service economy. The other questions need a set of simulations.
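As a toy illustration of how the first question might be operationalized: assume a task database in the spirit of the one the OpenAI paper draws on, where each task is flagged as physical or not and given a time weight, so the physical share per occupation falls out of a weighted sum. All occupations, tasks and weights below are invented for illustration.

```python
# Hypothetical task data: (task description, is_physical, share of work time).
jobs = {
    "museum conservator": [
        ("handle and mount artifacts", True,  0.5),
        ("write condition reports",    False, 0.3),
        ("research provenance",        False, 0.2),
    ],
    "financial analyst": [
        ("build valuation models", False, 0.7),
        ("present to clients",     False, 0.2),
        ("conduct site visits",    True,  0.1),
    ],
}

def physical_share(tasks):
    """Weighted fraction of a job's time spent on physical tasks."""
    total = sum(weight for _, _, weight in tasks)
    return sum(w for _, is_physical, w in tasks if is_physical) / total

shares = {job: round(physical_share(tasks), 2) for job, tasks in jobs.items()}
```

The same arithmetic run over a real task survey, staggered by country cluster as suggested above, would answer the first question; the second and third would layer wage data and simulations on top.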
Regardless of how good the chatbots are at conversation, I believe android robots with humanlike manual dexterity are decades away, let alone ones with cognition to match (loosely called ‘AGI’), which might take centuries. You might be more sanguine, but even the most optimistic date for reliable domestic-worker androids would be around 2060. In the interim between 2023 and then, most fiddly physical tasks will still fall to members of our species.
But here’s the thing about mastery over language — another regular drumbeat here at Elite Scotoma. It is the one thing that allowed what would otherwise have been just another primate with opposable thumbs to cooperate on a large scale, with every individual assigned specialized tasks without much understanding of how the tasks done by millions of others fit together to achieve a grand objective in some vast global system. That works because, at every level, someone records instructions in a standard medium and communicates them. Education ensures that, across generations, others understand and follow them.
That is why AI doesn’t need to master cognition and manual dexterity before radically transforming human societies and division of labor. Acquiring language at a sufficient level of proficiency goes a really long way.
The upside is that this time, unlike in prior technological revolutions, machines will largely not replace the workers with the least education, options and bargaining power. In fact, if the flipside above pans out, LLMs could be a great boon for mature economies like the USA, where fewer and fewer men want to attend college but do want jobs that pay a living wage. In an economic contrecoup without precedent, many could now go work right alongside the coastal Ivy League yuppies in their swanky metro workplaces. Or indeed, replicate that same work in the heartland, as some have called for. It bodes well also for European countries where the arithmetic of pensions has fallen off-kilter.
Not least, it means a unique now-or-never opportunity for countries that cannot offer affordable, world-class higher education to all their young people, let alone jobs to match. The above dynamic means India, Nigeria or Brazil could make a foray into industries that would otherwise have required expensive, exceptionally educated, globe-worthy employees of the kind they have been exporting at their own great cost. Now, much of the same work could be done with a much smaller group of these educated people creating and updating interactive, real-time instruction manuals, and legions of workers joining industries with higher productivity, making products with higher margins. A good thing, too, that the world’s largest economies and trade partners no longer frown upon industrial policy, eh?
Of course, a lot of that requires an all-too-rare combination of foresight and multi-agency coordination in governments. Barring which, someone, somewhere is soon going to discover that such a firm could well be a multinational, with the AI instructions generated on the other side of the world.
At this point, let me be clear: I’m no techno-optimist. The above research questions are not to estimate what I hope will happen in a year, but what I expect is likely as the relevant incentives align. Even a partial sway towards that track takes us into uncharted economic and political territory. Which is why the research must happen now.
I admit the part of me that spends exactly half my waking hours worrying about inequality enjoys flights of wishful thinking about this dynamic possibly recalibrating wage gaps within firms and between countries. Then I quickly slide back to worrying over what we are going to do about education.
Post-Date
26 Mar 2023: The image caption above nods at a related theme that has seen far more coverage — the people whose work the AI feeds on. That side itself has many facets. As above, AI models learn from user content on Web 2.0 platforms, which includes the work of professionals using the platforms in a professional capacity in good faith, without their consent. Other human work they require is in sanitizing the responses of chat algorithms to minimize the bad press that inevitably follows each model’s release. Below is a DW report on workers in India who create the training datasets for a different class of AI. A young woman in Kerala noticeably beams as she speaks of the independence the job has afforded her.
If you’re mentally modeling the research questions above, take a few moments to step back and visualize the global churn of the workforce underway at the moment. AI has consumed trillions of person-hours all around the world, from American bloggers, Wikipedia editors, photographers and creators early in the internet’s life to Roshni’s workdays today. While her employer’s business model is decidedly viable only in the short term, the AI she is training will soon upend the employment prospects of her age peers across her country and every other. How will governments ensure that same dignity of a decent wage to citizens with a median education five years from now?
[ There are non-AI research questions in the pipeline for Presbyopia, I promise. It’s just that right now is the moment to talk about LLMs and things are moving incredibly fast. The curious types need to apply for this PhD topic and send journalistic pitches six months ago!
And do check back at this link soon for the Post-Date section. ]
1. So sorry. I’m a chemical engineer by old training, but decided to go with a movie cliché to keep out distracting names of reagents I remember with a shudder.
2. For details, look up ‘fine-tuning’ or ‘in-context learning’.
3. This made me think of those microbes, like amoebas, that surround particles with their own bodies and just make them a part of themselves. I couldn’t remember the technical term. Naturally, instead of Google, I asked the new sidebar Bing. Phagocytosis. While you go about your day, LLMs are turning into a sort of substrate technology for our civilization, one that interfaces with every other algorithm, platform and, soon, hardware.