
In 1997, a pianist at the University of Oregon sat down and played three short pieces, each composed in the style of Bach. One was genuine Bach. One was written by a music professor named Steve Larson, who had spent his career studying the composer. And one was written by a computer programme called EMI, built by the researcher David Cope to analyse musical structure and generate new compositions in existing styles.
The audience was asked to identify which was which. They were confident. And they were wrong about everything. They picked the computer’s composition as real Bach. They picked Bach as the work of Larson. And they picked Larson, a man who had devoted decades to understanding exactly this music, as the computer. The machine had shifted everyone down one slot.
“That people could be duped by a computer programme,” Larson told the New York Times, “was very disconcerting.”
That was 1997, running on hardware less powerful than a modern dishwasher. Today’s models do far more than imitate Bach; they capture the structures that make Bach sound like Bach. So if a programme can fool an educated audience into mistaking its work for that of one of the greatest composers who ever lived, the rest of us might reasonably wonder: what exactly is left for us?
The burning question
I spend a fair amount of time at lunches, conferences, and dinners with family offices, investors, and business owners. The conversations cover markets, technology, geopolitics, the usual. But roughly fifteen minutes in, wherever we are, whoever is at the table, someone asks the question. Not always in these words, but always the same question: what are we all going to do?
They don’t only mean jobs. They mean purpose, identity, what to tell their children. Every previous technology wave produced sceptics. The dotcom wave had them. Mobile had them. Cloud had them. This wave produces something different. Call it vertigo: the disorientation of people sophisticated enough to understand what they’re looking at and honest enough to admit they don’t know where it leads.
The media, for its part, overestimates AI’s impact in the short run (blaming youth unemployment on automation when the actual culprits are minimum wage increases and employer tax hikes) and underestimates it in the long run (treating this as another technology cycle rather than the most significant shift in what human labour means since the Industrial Revolution).
Both errors leave the real question unanswered.
The two-hundred-year engine
The question has a two-hundred-year-old answer. The answer has a condition attached. Both matter.
The answer first. Sixty per cent of employment in 2018 was in occupations that did not exist in 1940. Agriculture employed 40% of the American workforce in 1900; it employs 2% today. The workers who left the farms did not sit idle. They became teachers, nurses, software engineers, personal trainers, user experience designers, and a hundred other things nobody could have named when the tractors arrived. Every automation wave has created new categories of human desire, and those desires created new work. The pattern has held for two centuries.
Now the condition. The engine runs on purchasing power. Without it, desire is yearning, and yearning doesn’t employ anyone. During the Engels’ Pause, roughly 1790 to 1840, British productivity soared while working-class wages stagnated. The engine did eventually work. “Eventually” took two generations. That is a policy challenge, not a technology problem. Societies have historically risen to it. The discomfort is in the word “eventually.”
And so the honest question is not whether there will be new work. There always has been. The question is whether it comes fast enough, and whether the gains from AI circulate broadly enough to fuel the next wave of demand.
Look around. The world is not finished. McKinsey estimates a $106 trillion global infrastructure gap by 2040: roads, bridges, power grids, schools, hospitals. Africa’s Great Green Wall needs $33 billion to restore 100 million hectares of degraded land. We haven’t started on space in any serious commercial sense. The list of things worth doing is, for practical purposes, infinite. The constraint has never been imagination. It has been the capital, the organisation, and the will to do it.
Three things that grow
Which brings us to the question underneath the question. If AI handles the execution, what will humans actually contribute?
Three things. And each grows as AI becomes more capable.
Meaning: determining what matters. What is right, what is beautiful, what is worth pursuing, what story we are living inside.
Connection: being there for each other. The human IS the product. When a hundred thousand people go to a stadium to watch Coldplay, or three hundred pack a small club where you can feel the bass in your chest, what they are paying for is the shared experience of being human together. The same principle runs through teaching, nursing, selling, coaching, managing, parenting: the presence of another person who chose to show up.
Commitment: putting yourself on the line. Skin in the game. The surgeon whose career is at stake, the founder who bets their savings, the builder who guarantees the work. AI can be the agent. Only a human can be the principal, the one who bears personal, irreversible consequences when things go wrong.
These are not three sectors of the economy or three job categories. They are three irreducible qualities of human activity, present in every role from the ward nurse to the chief executive, that AI cannot supply because they require a being that cares, that has lived, that will die. As AI takes over more of what can be executed, these three qualities absorb a growing share of what is valued.
Determining what matters
AI can optimise brilliantly within a defined objective. Give it a click-through rate to maximise and it will iterate faster than any human team. Karpathy’s AutoResearch automates the experimental loop at speeds no researcher can match. Anything that can be brute-forced towards a measurable target, a machine will handle better than we can.
But objectives live inside objectives, all the way up. And somewhere at the top of that stack, someone has to decide what we are actually trying to achieve. Not which option to pick from a menu (machines handle that), but what game we’re playing in the first place.
Frank Knight drew the line in 1921. Risk is quantifiable; uncertainty is not. “Profit arises out of the sheer brute fact that the results of human activity cannot be anticipated.” That is not just economic theory. It is a job description for anyone operating beyond the edge of available data.
A Harvard Business School study ran a five-month trial giving 640 Kenyan entrepreneurs access to a GPT-4 business advisor. The overall effect on revenues and profits was zero. The entrepreneurs who were already performing well gained 10 to 15%. Those who were struggling did worse. The binding constraint was not the advice. It was the human who knew which advice to follow, which question to ask, which opportunity to ignore.
This is meaning-making in its broadest sense. The priest interpreting scripture for a grieving family. The journalist deciding what events signify. The founder explaining why this company should exist. The parent teaching a child what matters. The voter choosing what kind of society to live in. These are normative acts that require a being who cares, who has lived, who will die. AI processes syntax; humans generate meaning. Luciano Floridi calls AI “agency without intelligence.” You can embed values in a training run, but someone still has to decide which values to embed.
Even the engineers who build AI are making normative choices about what to optimise for. There are objectives inside objectives, all the way up. At the top, a person.
Being there for each other
Humans want other humans. That sentence sounds banal until you realise how much of the economy it explains.
When a hundred thousand people fill a stadium to watch Coldplay, they are not there for the sound quality. They could listen at home, in higher fidelity, for free. They are there to sing alongside strangers, to feel the bass in their ribs, to be part of something that only works because everyone showed up. The same thing happens at a three-hundred-person club gig, a local football match, a dinner party. The human presence is what is being consumed. No recording, no stream, no hologram can substitute for the fact of being in the room together.
This runs through work in the same way. The teacher who inspires a child to love mathematics does so because humans are inspired by other humans. The salesperson who builds trust over three years closes the deal because the client trusts them specifically. The nurse whose presence reassures a patient, even when a machine handles the diagnostics. The coach who pushes you because they know you and you know them. In each case, the human is not performing a function that could be replicated more efficiently. The human presence is the function.
The philosopher Martin Buber called it the I-Thou relationship: an encounter that requires two subjects, not a subject and a tool. What people seek in connection is the reality of being understood by another mortal being who has their own concerns and chose to show up anyway. AI companion apps have surged 700% since 2022, and the results are instructive: moderate use reduces loneliness about as effectively as talking to another person, but heavy daily use makes it worse. The more people try to automate connection, the more they demonstrate it requires a person.
The economist William Baumol noticed in the 1960s that certain services never become more productive because the labour is the product. A string quartet cannot play the piece faster without changing what it is. Economists treated this as a disease. It is the answer. As every other cost approaches zero, irreducibly human services absorb a growing share of the economy. Healthcare is 18% of US GDP and rising. Live music is growing at 6 to 9% annually while the cost of generating recorded music collapses. NielsenIQ tested AI-generated advertisements and found weaker memory activation across the board; audiences described them as “annoying, boring, confusing,” regardless of age or demographic. When the functional version gets cheap, the human version becomes premium.
Putting yourself on the line
In early 2024, Klarna made a dramatic bet. The Swedish payments company replaced much of its customer service operation with an AI assistant, which handled two-thirds of all chats within its first month. CEO Sebastian Siemiatkowski celebrated publicly: the AI was doing the work of 700 agents, resolution times had dropped, the savings were enormous. Klarna cut staff and pointed to the numbers. Then quality collapsed. Customer satisfaction scores fell. Klarna quietly began rehiring humans and acknowledged it had “focused too much on efficiency and cost.”
Amazon made a similar bet. It mandated that 80% of internal code be written by its AI tool Kiro, cut thousands of engineering roles as part of a 30,000-person restructuring, then suffered a 13-hour AWS outage and a 6-hour retail collapse costing an estimated 6.3 million orders. An internal memo reportedly referenced “GenAI-assisted changes” as a contributing factor. That bullet point was later deleted.
In both cases, the missing element was the same. Someone on the hook.
AI’s consequences are parameter updates: reversible, impersonal. Human consequences are careers, reputations, liberty. That gap does not close as AI improves. It widens. Michael Kremer’s O-Ring theory explains why: in complex systems, as most components become highly reliable, the remaining human steps concentrate all the risk. As AI handles 95% of a process, the 5% requiring human judgement becomes disproportionately valuable.
The surgeon who uses a robotic arm still has their career on the line if something goes wrong. The roofer who sends a robot up to do the job still guarantees the work; you still need someone to call when it leaks. The lawyer who uses AI to draft the brief still puts their name on the advice; their licence is at stake if it’s wrong. The founder who uses AI to build the product still bets their savings and reputation. Even as AI handles more of the execution, the human warrants the outcome. AI can be the agent. It cannot be the principal.
The answer is in the question
In that Oregon auditorium, the audience listened to three pieces of music and got every attribution wrong. They were listening for technical mastery, and technical mastery is exactly what the machine could supply. They missed what made Bach human: not the notes, but the reason for playing them.
We are making a version of the same error when we ask “what can AI do?” and let the list grow longer each month. The question is answerable, and the answer is genuinely impressive. But it is the wrong question. The better one is what we want to do next.
There are $106 trillion of infrastructure to build. A planet that needs rewilding. Diseases that need curing. Children who need teaching by humans who inspire them. Communities that need tending. Businesses that need founding by people willing to bet their name on something new. AI gives us the most powerful tools any civilisation has held. The work was never going to run out. The question was always whether we would build the businesses, create the wealth, and organise ourselves to do it.
I fear we could snatch defeat from the jaws of victory: waste this moment through poor policy and a failure of nerve. But when I look at the founders across the table, already building differently, already asking better questions, already using these tools to do things that would have taken ten people and two years just eighteen months ago, I suspect that once again, the builders will outrun the worriers.
They always do.