Building the AI-Native Organisation

“Only the paranoid survive,” Andy Grove wrote, a decade after reinventing Intel in 1985. He was right. But Intel still got disrupted. It passed on GPUs, watched NVIDIA build a $4.6 trillion company on the opportunity, and by 2025 was worth 2% of NVIDIA. The difference between Grove’s Intel and today? That disruption took thirty years. The founders sitting across from me know they might have two.

This is the vertigo at the centre of every board meeting I sit in right now. A founder walks in having built something remarkable, knowing that the same tools she used to build it are available to every competitor, every incumbent, every twenty-two-year-old with a laptop and a subscription. AI is the most extraordinary capability any of us has ever had; the philosopher’s stone of productivity, available to everyone simultaneously. The excitement and the fear come from the same source, and anyone who tells you they’ve resolved that tension is lying to you or to themselves.

The headline number is Anthropic’s revenue: $1bn to $20bn in less than fifteen months. The interesting number is underneath. Anthropic quadrupled its engineering team last year. It also saw productivity per engineer triple. Four times the people, each producing three times as much. That is 12x the output capacity in twelve months, something few, if any, billion-dollar companies have ever managed.

The temptation is to treat this as an adoption problem: buy licences, run pilots. And if you run a big organisation, perhaps appoint a Head of AI. The interesting change is where the bottleneck sits. The cost of building software, generating campaigns, testing hypotheses: that has collapsed towards zero. The human bottlenecks haven’t moved. Which sectors to enter. Which prospects to meet face to face. Which of the ten things you could build this week is actually worth building. The bottleneck has shifted from production to judgement, and most organisations are still designed around the old one.

The term “AI-native” gets used loosely, usually meaning “we use AI tools.” The organisations actually built for this era rest on four principles. Prototyping culture: everyone is empowered to build and test. Shared context: clear strategy and objectives that align effort, so speed produces signal rather than noise. Platform discipline: the shared infrastructure stays stable while experimentation runs freely above it. And human empowerment: every AI application exists to make humans more capable at what humans do best.

Let’s unpack the four.

Prototyping culture

Everyone gets the tools: product managers, designers, analysts (not just engineers, who will find this obvious). Instead of debating what to build, you build it and look.

In the portfolio companies where I’ve seen this take hold, the pattern is remarkably consistent. Distribute the tools widely and early; behaviour follows within weeks. People prototype because they can. The standing question becomes “Have you built it?” and the culture shifts faster than any strategy offsite could achieve.

The energy in these teams is different. A product question that would previously trigger a week of analysis triggers a prototype by end of day. A competitive threat that used to provoke a strategy meeting provokes a build. The cost of testing drops so far that the default becomes “build it and let’s see.”

Shared context

Prototyping culture gives everyone the capability to build. Shared context tells them what to build towards. Without it, fifty empowered people prototype fifty different things by Tuesday, and the organisation has produced nothing but noise.

This is the amplification problem. In a traditional organisation, misalignment costs weeks: someone builds the wrong thing slowly. In an AI-native organisation, misalignment costs days and produces volume. AI doesn’t distinguish between a well-directed prototype and an irrelevant one. It amplifies whatever direction it’s given, including no direction at all. Organisations without clear strategic context will find that AI simply scales their incoherence.

The parallel with AI itself is exact. Give a language model bad context and you get slop. Give it precise context and you get signal. The same applies to humans with AI tools. An engineer with clear strategic context prototypes something useful. The same engineer without it prototypes something impressive and irrelevant. Multiply that across a team of twenty and the gap between the well-directed organisation and the poorly directed one widens by an order of magnitude.

In practice, this means the AI-native organisation needs its strategy, objectives, and priorities to be explicit, written down, and accessible to everyone who builds. The less time it takes someone to understand what the organisation is trying to achieve, the more valuable every prototype becomes.
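To make that concrete, here is a minimal sketch of strategy as shared context, assuming the context lives in a single version-controlled file that every prototype loads before it does anything else. The file name, helper names, and prompt structure are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: every prototype loads the same written strategic context
# and prepends it to its prompts. CONTEXT.md is a hypothetical file holding
# the explicit strategy, objectives, and priorities described above.
from pathlib import Path

CONTEXT_FILE = Path("CONTEXT.md")  # hypothetical single source of strategic truth

def load_shared_context() -> str:
    """Return the organisation's written strategy, objectives, and priorities."""
    return CONTEXT_FILE.read_text(encoding="utf-8")

def build_prompt(task: str) -> str:
    """Compose a prototype prompt that always carries the shared context."""
    return (
        "## Organisational context\n"
        f"{load_shared_context()}\n\n"
        "## Task\n"
        f"{task}"
    )

if __name__ == "__main__":
    print(build_prompt("Prototype a churn-risk view for the sales team."))
```

The code is trivial by design. The point is that the context sits in one place, under version control, and every builder and every tool reads the same words.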

Platform discipline

The more freedom you give at the application layer, the more stability you need underneath it. Prototyping culture means dozens of people building at speed. Platform discipline means they’re building on stable ground: shared databases, APIs, production systems, single sources of truth, all governed so that one team’s experiment doesn’t break another’s work.

This sounds like common sense. A recent lesson from Amazon suggests otherwise. The company mandated that 80% of code be written by its AI tool Kiro, cut thousands of engineering roles as part of a 30,000-person corporate restructuring, and then suffered a string of outages: a 13-hour AWS failure and a 6-hour retail collapse that cost an estimated 6.3 million orders. Amazon denies the connection. An internal memo reportedly referenced “GenAI-assisted changes” as a contributing pattern; that bullet point was deleted before wider circulation. Draw your own conclusions.

The principle scales down. A ten-person startup doesn’t have Amazon’s complexity, but it still needs to know which systems are shared and stable and which are free for experimentation. The database schema, the core API contracts, the deployment pipeline: these are the foundations. AI-generated applications sit on top. The discipline is knowing which layer you’re working on.
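As a sketch of that discipline, assume the repository declares which paths belong to the stable platform layer; a change touching them needs review, while everything else is free ground for experiments. The path patterns below are assumptions about one possible layout, not a recommendation.

```python
# Minimal sketch: classify a changed file as "platform" (stable, review
# required) or "experiment" (build freely). The protected paths are
# hypothetical; the real list is whatever your organisation declares stable.
from fnmatch import fnmatch

PLATFORM_PATHS = [
    "db/schema/*",        # the database schema
    "api/contracts/*",    # core API contracts
    "deploy/pipeline/*",  # the deployment pipeline
]

def needs_platform_review(changed_file: str) -> bool:
    """True if a change touches the shared, stable layer."""
    return any(fnmatch(changed_file, pattern) for pattern in PLATFORM_PATHS)

for path in ["prototypes/pricing_test.py", "db/schema/orders.sql"]:
    layer = "platform: review required" if needs_platform_review(path) else "experiment: build freely"
    print(f"{path} -> {layer}")
```

In practice the same boundary usually lives in a CODEOWNERS file or a CI gate rather than a script; the mechanism matters less than making the boundary explicit.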

Human empowerment

Klarna’s AI assistant handled two-thirds of customer service chats within its first month; CEO Sebastian Siemiatkowski called it a triumph of efficiency. Then quality collapsed. Satisfaction scores dropped. Klarna quietly rehired humans and admitted it had “focused too much on efficiency and cost.”

Klarna had been optimising for removing people from the process. The entire opportunity was making people better at it. This is the principle: AI exists to empower humans, and every application should be measured by whether it makes them more capable.

This principle has a hard edge. AI will automate genuine drudgery: data entry, research synthesis, first-pass drafting. Some roles will change beyond recognition. Some will disappear. Human empowerment is a design principle for the organisation, not a guarantee for every current job. The test is whether the people who remain are doing more valuable, more fulfilling work than before.

In customer service, that means AI handles research and drafting while the human brings empathy and judgement to conversations that need it (the thing Klarna’s customers missed). In product design, it means generating twenty variations in an hour while the designer applies the taste that determines which one actually works. In enterprise sales, AI can research a prospect’s competitive landscape, regulatory environment, and technology stack in minutes; the human is freed to focus on relationships and sector knowledge, the things that close deals.
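One way to picture the customer-service version: the model produces a draft and a rough self-assessment, and anything sensitive or uncertain is routed to a person. The confidence field and the 0.8 threshold are illustrative assumptions; the real signal would come from your own model and product.

```python
# Minimal sketch of AI-drafts-human-decides: routine replies go out,
# sensitive or low-confidence ones escalate to a human agent. All fields
# and thresholds here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Draft:
    reply: str
    confidence: float  # hypothetical model self-estimate, 0.0 to 1.0
    sensitive: bool    # e.g. complaint, dispute, distress

def route(draft: Draft) -> str:
    """Escalate anything sensitive or uncertain; auto-send the routine rest."""
    if draft.sensitive or draft.confidence < 0.8:
        return "escalate: a human reviews, personalises, and replies"
    return "auto-send: the AI handles the routine reply"

print(route(Draft("Your refund was issued this morning.", 0.95, False)))
print(route(Draft("I'm so sorry to hear about your situation...", 0.90, True)))
```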

The same principle applies to the biggest decisions in the company. The prototyping culture generates ten directions before lunch. Shared context ensures they’re pointed at the same problem. Platform discipline ensures they don’t break anything on the way. The question that remains is which prototype deserves commitment: a real team, real resources, a real go-to-market plan. That requires exactly the kind of judgement no AI can supply. AI is always an agent. Only humans can be principals, and it is skin in the game that sharpens the mind.

Board conversations shift from conviction (“I believe this market is moving here”) to evidence (“We built three versions; here’s what we learned”). The founder’s instinct still matters. Now it gets tested in hours, and the question becomes “We built it. Should we deploy to production?” The founder’s job was always about judgement. AI compresses the cycle between hypothesis and evidence, which means that judgement gets exercised more frequently and on better information. The tempo has changed. The job hasn’t.

The companies getting this right are the ones where every employee has more leverage over their domain than they did a year ago. And the employees report higher job satisfaction, because more of their time goes to the work that requires their specific expertise, their taste, their relationships.

Grove’s paranoia kept Intel alive for a generation. Thirty years of breathing room has compressed into two, maybe less. But two years with these tools is a different proposition from two years in any previous era. The founders who’ve understood this are already building differently. The tools are distributed. The context is clear. The prototyping reflex is there. The judgement stays human. We’ve seen what the good ones can build in six months. Two years is plenty.
