From Efficiency to Mission: What the AI Revolution Actually Needs
Why the Future of Work Is Impact, Not Optimization
Something strange is happening to the economics of purpose.
Andreessen Horowitz just released their Big Ideas for 2026, and as one recap put it, “AI is becoming the execution layer of the economy.” Systems that act. Infrastructure built for agents. Companies that replace entire workflows instead of assisting them.
They’re right about the technical trajectory. But there’s a glaring absence in this vision, one that explains why so many AI projects are failing right now: efficiency without purpose doesn’t sustain human engagement.
The industrial revolution was about efficiency in making. The AI revolution is about efficiency in deciding.
The a16z playbook leans heavily toward optimization—faster execution, automated workflows, agent coordination. The list is heavy on execution infrastructure and light on the motivation layer that determines adoption. That gap is the central problem: it decides whether these systems create value or become expensive experiments that never ship.
We may or may not be in an AI financial bubble. But we’re definitely trapped in something harder to escape: an ideological black hole where efficiency exerts so much gravity that we’re being pulled toward a center that will crush us if we don’t change direction.
Two Efficiency Revolutions
The industrial revolution was about efficiency in making. Faster looms, better engines, optimized assembly lines. We got remarkably good at producing things.
The AI revolution is about efficiency in deciding. Faster analysis, automated judgment, optimized choices. We’re getting remarkably good at outsourcing the cognitive work that used to require human attention.
Different kind of efficiency. Different kind of caution required.
Woodrow Hartzog and Jessica Silbey, in their recent paper “How AI Destroys Institutions,” argue that AI’s core affordances—undermining expertise, short-circuiting decision-making, isolating humans—systematically destroy the civic institutions that democratic life depends on. Universities, journalism, the rule of law itself: all require human judgment, human connection, the slow accumulation of expertise that AI promises to bypass.
Their warning is stark: “AI systems are a death sentence for civic institutions, and we should treat them as such.”
The industrial revolution’s efficiency gains could be brutal—child labor, environmental devastation, communities destroyed—but the damage was visible. You could see the factory smoke. You could count the injuries. The AI revolution’s damage is harder to see because it hollows out institutions from the inside. By the time the expertise has atrophied, the decision-making has ossified, and the human connections have frayed, the institution looks the same from outside. It just doesn’t work anymore.
This is why mission matters so much to how we build AI tools. Efficiency of production can be optimized without asking “toward what?” Efficiency of decision-making cannot. When we automate judgment, we embed values into the system—and if efficiency is the only value we embed, we get institutions that optimize themselves into irrelevance.
People don’t resist capability—they resist meaninglessness.
In companies, this shows up as adoption failure. The system works. The demo lands. Usage stays shallow. People don’t resist capability—they resist meaninglessness. When organizations respond to low adoption with mandates rather than inquiry, they get compliance without buy-in. Usage numbers rise. Engagement depth falls. The gap between reported metrics and actual value widens—but by then, the people who would have flagged the gap are gone.
Why Mission Matters
I know this isn’t theory because I’ve felt the difference.
For years I’ve volunteered at Fleisher Art Memorial, a community art center in Philadelphia that’s been offering free and low-cost art education since 1898. The work isn’t glamorous—helping with programs, supporting students, showing up consistently. But I’ve had more impact and satisfaction from that work than from most of my tech career.
Because the mission was clear. I could see who I was helping. The feedback loop between effort and impact was immediate and visible.
Steven Byrnes’ research on social instincts offers one lens for understanding why. His model suggests we seek recognition not for being efficient, but for making a difference—what he calls “Approval Reward.” He also describes a “compassion/spite circuit”: the experience that helping others feels meaningful, not just intellectually but viscerally. Every one of those circuits fired when I watched someone discover they could make something meaningful.
Compare that to optimizing conversion funnels for a SaaS product. The technical challenge might be more complex. The compensation is certainly better. But the motivational circuits stay quiet. The approval I was seeking came from metrics dashboards and quarterly reviews—abstractions that don’t engage the parts of us that evolved to care about each other.
This is why efficiency as our only societal value is so dangerous. When we optimize decision-making at scale—across industries, institutions, and communities—without asking “toward what?”, we build systems that technically function but fail to engage human purpose. The institutions still run. They just matter only to the people running them, with limited impact on the society they were meant to serve. AI handles the efficiency. But if humans aren’t providing the caring, the economics have nothing meaningful to follow.
The “Designing for Agents” Problem
One of the a16z ideas illustrates the blind spot perfectly: Stephenie Zhang argues that founders should be “designing for agents, not humans.”
“As agents take over retrieval and interpretation, visual design becomes less central to comprehension... We’re no longer designing for humans, but for agents. The new optimization isn’t for visual hierarchy, but for machine legibility.”
Technically correct. Strategically dangerous.
Yes, agents will increasingly be the first consumers of information. Yes, machine legibility matters more for agent-mediated workflows. But treating human experience as secondary misunderstands what agents are for.
"The missing layer is explicit purpose: efficiency toward what? Faster execution of what? Automated workflows for whom?"
Agents serve human purposes. If humans don’t see value in what agents produce—if the experience of working with agent-mediated systems doesn’t connect to human meaning and motivation—then adoption collapses. You can build the most elegant agent infrastructure in the world, and it will gather dust if humans don’t feel that it matters.
The problem isn’t the technical architecture. It’s the absence of mission.
The Approval Crisis
Here’s where Byrnes’ research becomes uncomfortably relevant to the tech industry itself.
Remember that Approval Reward model—the drive to seek recognition from people we admire? Tech founders aren’t exempt. And over the past two decades, the people tech founders learned to seek approval from weren’t users, weren’t communities, weren’t the people their products ostensibly served. They were venture capitalists.
This isn’t a moral failing. It’s an economic reality that shaped an entire culture. When your company’s survival depends on the next funding round, you optimize for what VCs reward. And VCs reward efficiency metrics, growth curves, and market capture—not human flourishing.
The a16z list is the purest expression of this approval structure. Fifteen ideas, all optimized for what makes VCs excited: execution speed, agent infrastructure, automated workflows. Nothing about what makes humans excited. Because end users aren’t the audience. Investors are.
We’ve created a gravity well that’s hard to escape.
The few will dominate the rails. The many will thrive in the spaces the rails open up.
Tech builds tools that optimize for investor approval. Those tools get funded and celebrated. Success gets defined by what investors reward—efficiency metrics, growth curves, market capture. Each cycle narrows the imagination further, pulling us closer to the center.
Financial bubbles pop. Capital gets reallocated. The market corrects. Ideological capture is harder to escape. When an entire industry’s collective imagination narrows to a single value—efficiency—it stops being able to see alternatives. The gravity becomes inescapable.
This is where we are. The a16z list doesn’t just reflect VC incentives; it reflects a framing where execution is treated as the primary problem. The missing layer is explicit purpose: efficiency toward what? Faster execution of what? Automated workflows for whom?
When you can’t articulate the question, you’re past a bubble. You’re in a black hole where the exit isn’t visible from inside.
The Railroad Parallel
There’s a historical precedent worth examining. In the late 19th century, a handful of railroad companies consolidated control over national transportation. The Vanderbilts, the Harrimans, the Hills—these titans built empires by connecting distant markets, creating the infrastructure of a national economy.
But here’s what’s often overlooked: the real transformation happened locally. The railroads created the conditions, but local merchants, regional manufacturers, and community institutions captured the actual value. Small towns became cities. Regional specialties became national brands. The few dominated the rails, but the many thrived in the spaces the rails opened up.
The same pattern is emerging with AI.
The FAANG companies—and their successors in the AI race—are building the rails: a handful of companies will dominate foundation models, AI infrastructure, and the platforms that run on them.
But the real change, the change that matters to human flourishing, will happen locally. In communities, in specialized domains, in the particular contexts where standardized AI can’t quite reach. This is where mission becomes king.
If Byrnes’ model holds, this motivation is strongest when we can see the impact on people we know, in communities we’re part of, on problems we directly observe. National scale efficiency gains don’t engage this drive. Local impact does.
Bringing Mission to the Table
This creates a strategic opportunity that inverts the usual tech playbook. Build for local impact instead of national reach. The local is exactly where AI-augmented work creates the most human value.
But breaking the cycle requires redirecting where founders seek approval.
The a16z list is written by venture capitalists for founders. The incentives are clear: build things that scale, capture markets, generate returns. These incentives produced the last generation of technology companies—and also produced the social media platforms that optimized for engagement over wellbeing, the gig economy platforms that extracted value from workers, the advertising-driven models that monetized attention without creating meaning. Each one a triumph of VC approval metrics and a failure of human impact.
What’s missing is the nonprofit and mission-driven perspective—and more fundamentally, a different approval structure entirely.
AI doesn't replace human purpose—it makes purpose the primary competitive advantage.
Organizations focused on social impact have different design instincts. They think first about outcomes for beneficiaries, not growth metrics for investors. They’re practiced at measuring impact, not engagement. They understand that human motivation doesn’t follow efficiency curves.
That doesn’t mean nonprofits should build every AI system. But the people designing AI systems should be informed by mission-driven thinking. When you’re designing “the execution layer of the economy,” you need input from people whose primary goal is human flourishing.
The current AI project failure rate isn’t a technical problem. It’s a design problem that comes from optimizing for the wrong objective function.
Outcome-Based Models and the Nonprofit Question
This shift toward impact-driven work suggests something interesting about business models.
Outcome-based pricing—where you pay for results instead of usage—aligns naturally with mission-driven work. Sierra’s model, where customers pay for resolved issues instead of API calls, represents early movement in this direction. The implications go further.
Consider what we traditionally call “nonprofit” work—organizations focused on social impact over profit maximization. These organizations have always struggled with the efficiency-versus-mission tradeoff. Resources spent on operations were resources not spent on impact. The constant question: how do we do more good with less overhead?
AI dissolves this tension. When efficiency becomes cheap, organizations can focus almost entirely on impact. The operational burden that distinguished for-profit from nonprofit becomes negligible. What remains is the mission itself.
This suggests a convergence. As outcome-based models spread and AI handles operational efficiency, the distinction between “business” and “nonprofit” becomes less about structure and more about purpose. The question isn’t whether you’re optimizing for profit—it’s what impact you’re trying to achieve and how you’re measuring it.
Training AI on What Actually Motivates Us
There’s a final consideration that makes impact-focused work genuinely important: we’re training AI systems on human values, and efficiency alone is an impoverished training signal.
When AI systems are trained on human behavior, they extract patterns from what we optimize for. If the training data only reflects efficiency—speed, scale, cost reduction—then efficiency is what the systems will be optimized to produce. Human motivation is richer than efficiency. If models like Byrnes’ Approval Reward and compassion/spite circuits are directionally correct, impact on others matters to us at a deep level—not just as belief, but as felt experience.
Not by building faster, but by remembering what we were building toward.
Impact-driven work generates different training signals. When humans work on things that matter to communities, work that engages their neurological drive to care about others, they produce data about what flourishing looks like. They demonstrate preferences that go beyond mere optimization.
This matters because the AI systems we’re building now will shape the AI systems we build later. If we want AI systems that reflect human values—not just human efficiency—we need to demonstrate those values in our work. Impact-driven organizations aren’t just doing good; they’re contributing to the training data that shapes what patterns AI systems extract and reproduce.
The Shift
We’re at an inflection point. AI is becoming the execution layer of the economy—a16z is right about that. The strategic question is what we execute toward.
The neuroscience suggests an answer: work that engages our deep motivation circuits, work where we can see impact on people we care about, work that earns approval for making a difference.
The economics suggest an opportunity: local impact, outcome-based models, the convergence of mission and margin as efficiency becomes commodity.
The historical parallel suggests a timeline: the railroads consolidated quickly, but the local transformation took a generation. We may be at the beginning of a similar arc.
And the narrowing imagination suggests an urgency: we need mission-driven influence at the design table now, not after we’ve built another generation of systems that optimize for the wrong objectives. The gravity is getting stronger—and the way out is remembering who technology was supposed to serve in the first place.
The age of efficiency optimization served us well—for production. But we’re now optimizing decision-making, and that requires something efficiency alone cannot provide: purpose. AI doesn’t replace human purpose—it makes purpose the primary competitive advantage, the thing that determines whether our institutions survive or hollow out from within.
The few will dominate the rails. The many will thrive in the spaces the rails open up. Mission becomes king not despite AI, but because of it.
If the real value is local, then the missing participant in AI conversations is obvious: the leaders closest to beneficiaries and institutions. They’re still not central to most discussions about how these systems get built. Next, I’ll write about the translation gap between mission and technology—and the concrete practices that close it.
And maybe—just maybe—that’s how we escape the black hole. Not by building faster, but by remembering what we were building toward.
This is part of the Designing Intelligence series, exploring how human and machine intelligence can evolve together through strategic design, stewardship, and thoughtful integration.
Sources and Further Reading:
Andreessen Horowitz, “Big Ideas 2026: Part 1,” a16z, December 2025. https://a16z.com/newsletter/big-ideas-2026-part-1/
Ruben Dominguez, “a16z Partners Just Laid Out the AI Playbook for 2026,” The AI Corner, December 2025.
Steven Byrnes, “Social drives 2: ‘Approval Reward’, from norm-enforcement to status-seeking,” LessWrong, November 2025.
Steven Byrnes, “Neuroscience of human social instincts: a sketch,” LessWrong, November 2024.
Woodrow Hartzog and Jessica Silbey, “How AI Destroys Institutions,” Boston University School of Law, 2025.