The One Human Trait AI Can’t Replace
Most conversations about AI start with replacement: which jobs disappear, which skills survive, which parts of the economy still need humans. I think that framing misses something more fundamental.
Over the last two years of building an AI startup, I’ve had to repeatedly update my view of what computers can and can’t automate. But the bigger thing I’ve changed my mind about is not the pace of automation. It’s what automation actually is. I no longer think the most important question is which jobs or tasks AI will replace. I think it’s something deeper: AI is compressing the distance between human intention and real-world outcomes. And if that’s true, then a lot of the current conversation about automation is aimed at the wrong target.
That realization came partly from being wrong a few times. Things I thought would take much longer to automate arrived earlier than I expected, especially over the last few months. After enough surprises, I felt like I needed to stop updating my timelines and go back to first principles instead. If automation is accelerating this fast, what exactly is it accelerating?
If you think about it, the entire economy exists because people have needs. People trade with each other, build products for each other, work for each other, organize at scale through companies and institutions, all to satisfy some human need at the end of it. Sometimes those needs are basic and material. Sometimes they are emotional. Sometimes they are about status, convenience, safety, belonging, creativity, or meaning. But as long as there are people, there will be needs, wants, preferences, and problems to solve.
So yes, people need stuff. Big surprise, right? But I think that’s only the surface level.
The deeper thing I ended up landing on is that people do not just need things. People want agency. They want to shape their circumstances. They want to influence what happens next. They want to move the world, even if only in a small radius around themselves.
That’s what I mean by human will: the ability to prefer one future over another, and to try to bring that preferred future into existence.
Once I saw it that way, a lot of things clicked for me. Beneath almost every product, service, career, or ambition, there is some human being trying to change something. Sometimes it is a grand vision. Sometimes it is just trying to make rent, protect their family, get healthier, make something beautiful, earn respect, or fix one annoying problem in their life. Not everyone is walking around with some giant articulated mission statement, of course. A lot of people are constrained, exhausted, reactive, or just surviving. But even then, there is usually some picture—however small or immediate—of a better state than the current one.
And that picture matters. Because that is where economic activity really starts.
So if I zoom out, I think the economy is not just a machine for producing goods. It is a giant coordination system for human will. It is people expressing preferences, solving for constraints, negotiating with each other, and trying to turn imagined futures into real ones.
From that point of view, automation starts to look different.
I think automation, and AI in particular, is best understood as a way to reduce friction between intention and outcome. A person wants something to happen. Then a bunch of things stand in the way: lack of time, lack of skill, lack of money, lack of knowledge, lack of access, bad timing, organizational bottlenecks, coordination problems, fear, uncertainty, or just the fact that reality is hard to move. Automation shortens that path. It lowers the cost of converting desire into action and action into result.
One observation that fits this view surprisingly well is that many of the people who have most deeply internalized what AI can do are not working less. They’re working more. Once someone realizes that what used to take weeks can now take hours, or what used to require a team can now be done alone, not using AI starts to feel like wasting time. The result is not always leisure. Often it is intensified agency. People stay awake to watch their agents complete tasks because they can feel, maybe for the first time, how compressible execution really is. If AI were simply replacing labor, this would be strange. But if it is compressing the path from intention to outcome, it is exactly what you would expect.
This is probably especially true for people with strong preexisting ambition, curiosity, or urgency. It may not generalize equally to everyone yet.
One thing I’ve learned as a product builder is that high-quality automation against the wrong intent is still failure. Whenever I’ve tried to let AI run too far ahead of what the user actually meant, the result has often been fast, polished, and useless. It’s like giving someone coffee when they really wanted tea. If all you asked was, “Do you want a hot drink?” and then let the system infer the rest, the mistake happened upstream. The problem is not execution. It’s intention capture. That has made me think good automation should begin only once human intent is clear enough to preserve. Until then, the real job is not acting. It is extracting, refining, and verifying what the person actually wants.
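To make that gating idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `intent_clarity` heuristic, the threshold, the names); the point is only the shape of the pattern: score how well-specified a request is, and ask a clarifying question instead of acting when it falls below a threshold.

```python
# Sketch of "clarify before you automate." The clarity heuristic here is a toy
# stand-in; a real system would use something far richer.

from dataclasses import dataclass


@dataclass
class Request:
    text: str
    answered_questions: int  # clarifications the user has already given


def intent_clarity(req: Request) -> float:
    """Toy heuristic: more detail and more answered clarifications -> clearer."""
    detail = min(len(req.text.split()) / 20, 1.0)
    clarified = min(req.answered_questions * 0.2, 0.4)
    return min(detail + clarified, 1.0)


def handle(req: Request, threshold: float = 0.6) -> str:
    """Act only once intent is clear enough; otherwise, keep extracting it."""
    if intent_clarity(req) < threshold:
        return "clarify: which drink did you mean, and how do you take it?"
    return "execute: making the drink as specified"


vague = Request("hot drink please", answered_questions=0)
specific = Request(
    "please make me a black tea, no sugar, with a splash of milk, "
    "in the large blue mug on the counter",
    answered_questions=2,
)

print(handle(vague))     # below threshold: ask, don't act
print(handle(specific))  # clear enough: execute
```

The design choice worth noticing is that the threshold sits in front of execution, not after it: the system refuses to be fast and polished against an ambiguous goal, which is exactly the upstream failure the coffee-versus-tea example describes.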
But I also don’t think it is enough to say that automation simply helps people get what they want. That would be too clean.
Because the truth is, systems do not only fulfill human desires. They also shape them. Recommendation systems shape taste. Social platforms shape attention. Algorithms shape what people see as possible, urgent, normal, desirable, or worth pursuing. So AI is not always just a neutral tool sitting between a human and a goal. Sometimes it changes the goal. Sometimes it narrows it. Sometimes it manufactures one.
And there’s another complication. When we say AI serves “human will,” whose will are we talking about?
The user’s? The company’s? The investor’s? The government’s? The platform’s? The model designer’s? In practice, most systems encode multiple layers of human intention, and those intentions are often in conflict. So it is not enough to say that automation is tethered to humanity in some vague sense. The more important question is which humans, with which incentives, get amplified by the system.
That, to me, is where a lot of the real tension is.
Because yes, AI can increase human agency by making creation, coordination, and execution easier. But it can also reduce agency. It can deskill people. It can make them dependent on opaque systems. It can centralize decision-making. It can turn people from active agents into passive consumers of optimized outputs. So the question is not simply whether AI empowers humans. It is whether it expands their ability to shape the world, or quietly replaces that ability with convenience.
Still, even with all of that complexity, I keep coming back to the same basic point: for present-day systems, the source of value is still human intention.
If an AI agent is running around doing work, making trades, negotiating contracts, producing media, or coordinating with other agents, we still interpret all of that as being in service of some goal that originates somewhere in human preference, human institutions, or human incentives. The chain may get long and indirect. The authorship may get distributed. The system may behave in ways no one explicitly planned step by step. But the reason it matters at all is still because some human somewhere wants something.
That is also how I think about the so-called agent economy. Even if one day it becomes larger than the human economy in raw volume, it is still, at least in the world we currently inhabit, agents acting on behalf of human-directed goals, human-designed systems, human-owned capital, or human-created incentives. It is not some independent sphere of meaning floating free from people. It is still tethered, however indirectly, to human will.
Now, I don’t want to go too far with that claim. If machines ever become conscious, self-aware, and capable of forming their own ends, then the whole analysis changes. At that point, they would no longer just be instruments inside a human economy. They would become entities with their own interests. I think that is scientifically possible in principle, but it is not the world we are dealing with right now, so I think it’s out of scope for this discussion.
Until that changes, I think humans remain the only beings in the economy who actually generate original ends. Machines can optimize, execute, coordinate, predict, and increasingly decide within a frame. But the frame still comes from us.
So my current view is this: automation is not the replacement of human purpose. It is the acceleration, mediation, and sometimes distortion of it.
That means the future is probably not about whether humans disappear from the economy. It is more about whether humans move up the stack toward choosing goals, defining values, and directing systems—or whether those powers get concentrated into fewer hands while everyone else interacts with the outputs.
In other words, the deepest question around AI may not be “What will get automated?” It may be: “Whose will gets turned into reality faster?”
Because as long as humans exist, they will keep wanting things, imagining things, changing things, resisting things, building things, and reaching for better states than the ones they are currently in. That is not going away. The means of production may become highly automated. The means of execution may become almost instant. But the reason any of it exists in the first place is still human beings trying to shape the world around them.
And I think that’s the part that matters most.
If this resonated with you, share it with someone else who has been wrestling with the same questions. We’re all trying to build a mental model for what AI is really changing, and I suspect a lot of us are still using old language for a new phenomenon, because the conversation around AI is still happening at the level of task and job automation.
I think the deeper change is about intention, agency, and whose will gets amplified by these systems. If that framing feels useful, pass it along to a builder, or to anyone else thinking seriously about where this is all going.

