From the CEO: AI as Reclamation
Why mission-driven organizations should refuse the displacement story — and what to do instead
In the first months of 2026, Uber burned through its entire annual AI budget. Eight in ten IT leaders are reporting unexpected charges. Average enterprise AI spend rose 36% in a single year. And yet, at the same time, the actual cost of running AI inference has been falling to as little as one-tenth of what it was a year earlier, according to some calculations. We are, by some measures, in the middle of the most dramatic cost deflation in the history of computing.
So here is the question worth sitting with: if AI is getting that much cheaper that fast, why are companies firing people now?
The honest answer is that they're not firing people because of what AI costs today. They're firing people because of what AI is projected to cost — and more critically be worth — in three or four years. Gartner says inference costs will drop by 90% over the next four years. They might be right. But the workers being displaced today are not living in 2030. They're living in 2026, where the cost case isn't proven, the productivity gains are uneven, and an MIT study found that AI was only economically viable in 23% of vision-based roles — humans were cheaper in the other 77%.
This is the part of the AI conversation that almost no one is willing to say out loud: a great deal of what's being sold as inevitability is actually a bet — and the bet is being placed on someone else's chips.
I want to make a four-part argument about why that matters, and what mission-driven organizations should do instead. The argument runs through Karl Marx and William Morris, which I realize are not typical name-drops for consulting blog posts. But the people doing this work in the 19th century already saw the shape of what's happening now, and we'd be foolish not to use the maps they drew.
1. The honest cost case
Let me be clear about what I'm not arguing. I'm not arguing that AI shouldn't displace any work. I'm not arguing that automation is morally wrong, or that organizations have an obligation to keep every role they currently have. Markets are real. Cost structures are real. If a tool genuinely makes a category of work cheaper to produce at the same or better quality, then over time that work should shift to the tool. That's how technological progress has always worked.
The honest cost-benefit analysis for displacement is straightforward. You take the all-in cost of the current way of doing the work — wages, benefits, management overhead, training, error rates, time-to-completion. You compare it to the all-in cost of the AI-assisted version — compute, licensing, integration, error rates, the human supervision still required, the rework when the tool gets it wrong. If the AI version is meaningfully cheaper and meets your quality bar, you have a defensible case for the change.
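If it helps to see the shape of that calculation, here is a minimal sketch in Python. Every number in it is invented for illustration, and the line items are the ones named above; the point is the structure of the comparison, not the figures.

```python
# A minimal sketch of the displacement cost comparison described above.
# Every number here is a made-up placeholder, not a benchmark; the point
# is which line items belong in the comparison, not the figures themselves.

def human_cost(units, wage_per_unit, overhead_rate, error_rate, rework_per_error):
    """All-in annual cost of doing the work the current way."""
    labor = units * wage_per_unit * (1 + overhead_rate)  # wages, benefits, management, training
    rework = units * error_rate * rework_per_error       # cost of catching and fixing mistakes
    return labor + rework

def ai_cost(units, tokens_per_unit, price_per_m_tokens, fixed_costs,
            supervision_per_unit, error_rate, rework_per_error):
    """All-in annual cost of the AI-assisted version."""
    compute = units * tokens_per_unit / 1_000_000 * price_per_m_tokens  # token-metered inference
    supervision = units * supervision_per_unit   # the human review the tool still requires
    rework = units * error_rate * rework_per_error
    return compute + fixed_costs + supervision + rework

if __name__ == "__main__":
    units = 50_000  # hypothetical work items per year

    current = human_cost(units, wage_per_unit=6.00, overhead_rate=0.35,
                         error_rate=0.02, rework_per_error=15.00)
    assisted = ai_cost(units, tokens_per_unit=4_000, price_per_m_tokens=3.00,
                       fixed_costs=60_000, supervision_per_unit=4.50,
                       error_rate=0.06, rework_per_error=15.00)

    print(f"Current way:  ${current:,.0f} per year")
    print(f"AI-assisted:  ${assisted:,.0f} per year")
    # Note how much of the AI-side total comes from supervision and rework, the
    # two inputs that are hardest to estimate before the tool is in production.
```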
That's the rule of the game inside the market frame. I have no quarrel with this rule. This rule and I are friends.
2. The cost case is unproven and still moving
The quarrel I have is that almost no one cutting jobs right now is doing that calculation with reliable inputs.
The compute cost numbers are wildly volatile. A well-optimized AI model running on dedicated hardware costs about half a cent per million tokens — tokens being the small chunks of text AI is billed by, where a million tokens is roughly the length of a short novel. But almost no one is actually running their own AI on their own hardware. They're buying it through APIs — connections to AI services hosted by cloud providers — which charge somewhere between sixty and two hundred and fifty times more than that theoretical floor. That gap covers real things: reliability, infrastructure, support, the cost of having capacity sitting idle for when you need it. But it's also a healthy profit margin, and a margin that compounds every time someone sends a prompt. Token-based pricing turns every inference into a recurring expense that scales with curiosity. The cheaper-by-default story assumes a price that almost no one is actually paying.
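To put that markup in concrete terms, here is a back-of-the-envelope sketch. The half-cent floor and the sixty-to-250-times range are the figures cited above; the monthly usage volume is a hypothetical placeholder.

```python
# Back-of-the-envelope: the gap between the theoretical floor and API prices.
# The floor and the multipliers are the figures cited above; the usage volume
# is an invented example.

floor_per_million_tokens = 0.005      # roughly half a cent, self-hosted and well optimized
api_multiplier_low, api_multiplier_high = 60, 250

monthly_tokens = 500_000_000          # hypothetical: 500 million tokens a month across an org

floor_cost = monthly_tokens / 1_000_000 * floor_per_million_tokens
api_low = floor_cost * api_multiplier_low
api_high = floor_cost * api_multiplier_high

print(f"Theoretical floor:  ${floor_cost:,.2f} per month")
print(f"Typical API range:  ${api_low:,.2f} to ${api_high:,.2f} per month")
```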
The productivity numbers are murkier still; the MIT study I mentioned is just one data point. Harvard Business Review has documented what they're calling "the productivity panic" — the finding that when AI makes tasks easier, people don't stop working. They just keep going. We put an electric motor on the hamster wheel and now we run faster, for longer, producing more and more stuff of less and less marginal value. Whether that constitutes a productivity gain depends entirely on what you think productivity is.
So here's where we actually are. Compute costs are falling fast but not the prices most enterprises pay. Productivity gains are real in some categories and illusory in others. The cost case for displacement might land in three to four years for a meaningful set of roles. Today, it doesn't land cleanly anywhere.
And yet displacement is happening at scale, justified by projections that haven't arrived. People are being asked to absorb the consequences of a future that isn't here yet, on the basis of a forecast that may or may not pan out. You don't get to gamble on human livelihoods and call it innovation. It's a bet, and the workers losing their jobs aren't the ones who placed it.
3. The market frame is just a framework, not the whole game
This is the move where the argument turns, so I want to make it carefully.
I am not anti-market. As an entrepreneur, I’ve clearly claimed my camp. Market capitalism, as a system for coordinating economic activity at scale, has been one of the most powerful tools humans have ever built. It does things that other systems demonstrably cannot do. I work inside it. My firm exists inside it. The mission-driven organizations I serve operate inside it.
But the market is just a framework. It is a useful tool for one particular set of questions — namely, how do we efficiently allocate scarce resources — and it is genuinely good at those questions. It is not a tool for answering what is worth doing, or what kind of work makes a life, or what we owe each other. When we treat the market as if it were the whole game, we stop being able to see the things it doesn't measure.
This is where Karl Marx becomes useful — not as a political program, but as a diagnostician. Marx, writing in the 1840s, watched the Industrial Revolution take integrated handcraft work and shatter it into rote, fragmented, mass-produced tasks. A shoemaker who used to design, cut, stitch, and finish a pair of shoes became a person who hammered the same heel a thousand times a day. The work got cheaper to produce. It also got soulless. Marx called this alienation: the worker separated from the product they made, from the process of making it, from the people they made it with, and from their own creative essence as a maker.
You don't have to agree with anything else Marx ever wrote to recognize that he was describing something real. The fragmentation he observed didn't end in 1850. It became the dominant logic of how work got organized in the 20th century. Most of us are doing alienated labor right now. Most of us know it.
William Morris, working in England a generation later, looked at the same fragmentation and asked a different question. Morris was a designer, a poet, a printer, and a socialist of a particularly unusual kind — one whose primary concern was not who owned the factories but whether the work itself was worth doing. Morris asked what work would look like if it were arranged so that human beings could find pleasure in it. He believed that craft — integrated, skilled, judgment-laden work — was something every person had a right to. The Arts and Crafts movement that grew from his thinking has lasted for a century and a half because Morris was naming something that the market frame couldn't name: the idea that work is supposed to do something for the worker, and that good work is an end unto itself.
So here is the hinge of this essay: there is a categorical, qualitative, and moral difference between the work that should be automated and the work that shouldn't.
The work that should be automated is the work that capitalism alienated us into in the first place. Rote tasks. Repetitive, deterministic work. The kind of fragmented sub-step that exists not because anyone wanted it to exist as a job, but because someone broke a whole craft into pieces small enough to hire out to cheap, largely unskilled labor. The data entry. The form processing. The fifth iteration of the same standard report. The routine reconciliation. The work that, if you're honest, no one would design from scratch as a meaningful human role — but which we keep doing because it's currently profitable to do it.
The work that shouldn't be automated — the work where automation is a category error — is the work of judgment, creativity, taste, and care. The counselor sitting with a client. The teacher reading a room. The pastor at a hospital bedside. The case manager weighing a family's specific situation. The development officer who can sense when a donor is ready to be asked. The strategist who can tell which of two technically correct paths is actually right for this organization at this moment. This work is not a slower version of work AI could do faster; it is a different kind of work entirely. The market frame, on its own, can't tell you that — because the market frame measures throughput, not judgment.
This is the move that mission-driven organizations are uniquely positioned to make. For nonprofits, faith-based organizations, higher education, the public sector, and veteran-serving organizations, the people doing the work are not overhead. They are the mission expressing itself. Counseling is the mission. Teaching is the mission. Casework is the mission. The temptation to treat these roles as cost centers to be optimized away is a category error dressed up as efficiency, and it will erode the thing you exist to do.
4. AI as reclamation, not subtraction
Here is the part I find genuinely hopeful, and I want to land it cleanly.
AI is very good at the work that capitalism alienated us into. The rote, deterministic, fragmented work described above is exactly the work AI tools can absorb, and absorb well. This is not a bug. This is what the technology is genuinely best at.
Which means we have, right now, a rare and specific opportunity. For the first time in roughly two centuries, we can give the alienated work back to machines and reclaim the rest for humans. We can use this moment to undo a piece of what the Industrial Revolution did. The rote work goes to the AI. The judgment, creativity, taste, care, innovation, and — to borrow Morris's word — the pleasure in the work, comes back to the people doing it.
This is not the future that arrives by default. The default future is the one we're already watching unfold: cut the humans, keep the rote, monetize the fragments, deepen the alienation. That's not AI's fault — AI is a tool, not a destiny. That's a choice organizations are making, often without realizing they're making it, often justified by cost projection models that haven't matured.
The reclamation future requires the same tool but a different choice. It requires organizations to ask: which work in our org is the rote, fragmented, alienated work, and how do we route it to AI? Which work in our org is the judgment-and-care work, and how do we protect the people who do it — and free them to do more of it, better?
Mission-driven organizations should be leading this. Not because they are morally superior, but because their work cannot be reduced to rote without destroying what they exist to do. That’s why they are nonprofit enterprises in the first place. They can't afford the subtraction story. The reclamation story is the only one that doesn't erode their mission.
So I want to come back to the line that started the train of thought that ended in this essay. A few weeks ago, Trevor Noah quoted someone — I don't know who — who said that any company using AI to fire people is a company that has run out of ideas. I think that's almost right, but I want to push it one notch harder.
Replacing people with AI doesn't just mean you've run out of ideas. It means you've decided you don't need any more ideas — because ideas come from humans. The enterprise that has decided it needs no new ideas has a terminal diagnosis, however long it takes to finally die.
That's the choice. Not whether to use AI; we are all going to use AI. The choice is whether you are using it to subtract people from your work, or to give them back the parts of their work that capitalism took.
The honest cost case might land. The projections might pan out. None of that resolves the question of what kind of organization you want to be when the dust settles, or what kind of work you want the humans in your care to be doing. That question is yours. The market won't answer it for you. It was never going to.