The SaaSpocalypse

By Bri Stanback · 12 min read

Software stocks just had their worst week since April. The market is probably overreacting. But the anxiety driving the selloff? That's been building for a year. AI is just forcing the conversation into the open.

I was watching the ticker on Tuesday morning when Thomson Reuters dropped 15.8% in an hour. Then LegalZoom, nearly 20%. I kept refreshing, the way you do when something feels like it might matter. By noon, $285 billion had evaporated from software, financial services, and legal tech. They're calling it the "SaaSpocalypse."

The trigger was Anthropic's Cowork plugins — but the trigger isn't the interesting part. What caught me was the feeling in my chest while I watched. Relief that the conversation is finally public. Dread that it's real enough to price. The skills I spent a decade building are becoming... not worthless, but different. Cheaper. More abundant. The thing that made me valuable is now something I coordinate rather than do.

Is it an overreaction? Probably. When DeepSeek triggered a similar panic last year, Nvidia lost $600 billion in a day. A year later, the feared disruption never materialized. Markets overcorrect. Fear spreads faster than fundamentals.

But here's what's different this time: the conversation is finally public. The anxiety that builders have felt privately for a year — "wait, can AI actually do my job?" — is now being priced into the market. Wall Street is catching up to what anyone using Claude Code already knew: execution is getting cheaper. Fast.

"In 2026, writing code is no longer the hard part. AI can generate features, refactor services, and accelerate delivery at scale. Speed is now expected, not a differentiator. What AI removed is friction, not responsibility." — Security Boulevard

This lands differently when you're living it.


#From Vibe Coding to Vibe Working

Andrej Karpathy coined "vibe coding" in February 2025: "fully give in to the vibes, embrace exponentials, and forget that the code even exists." Steve Yegge took it further, describing work as fluid—"an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks."

"Some bugs get fixed 2 or 3 times, and someone has to pick the winner. Other fixes get lost. Designs go missing and need to be redone. It doesn't matter, because you are churning forward relentlessly on huge, huge piles of work." — Steve Yegge, "Welcome to Gas Town"

This sounds chaotic. It is chaotic. But it's also the emerging reality for anyone running AI at scale.

Now the term is spreading beyond engineering. Anthropic's Scott White announced Opus 4.6 this week with a new framing: vibe working.

"Everybody has seen this transformation happen with software engineering in the last year and a half, where vibe coding started to exist as a concept... I think that we are now transitioning almost into vibe working."

Anthropic demonstrated this with Cowork—the plugin system that triggered the selloff—built in about ten days. "@claudeai wrote Cowork," their PM Felix Rieseberg confirmed. They vibe coded an enterprise integration layer. OpenAI followed with Frontier, their competing platform for deploying AI agents in enterprises.

Microsoft branded it Agent Mode in Excel and Word—describe tasks in plain language and the AI handles the work. But how "agentic" is it really? Their own guidance emphasizes "least-privileged access" and "small, incremental expansions of responsibility." It's more AI-assisted features than autonomous agents.

Google's approach is similar: Gemini in Workspace adds AI to Sheets, Docs, Gmail—useful, but not transformative. Where Google gets more interesting is Gemini Enterprise, which lets you build custom AI agents with permissions-aware access to your data. Notion teased the same: custom agents to automate different workflows, coming soon.

Salesforce went furthest on marketing: Agentforce 360 is their platform for "the Agentic Enterprise," and they shipped a feature called Agentforce Vibes—letting builders "vibe-code" apps grounded in company data. They're reporting 119% agent growth in the first half of 2025.

The shift isn't from "engineer" to "prompt engineer." That's too small. The shift is from maker to orchestrator. From building to coordinating. From depth to breadth. And it's not just for developers anymore—it's coming for every knowledge worker.

But a year in, the lesson isn't pure chaos. LLMs are genuinely good—and getting better—at greenfield work: new projects, clear requirements, blank slates. They struggle with brownfield: existing codebases, implicit conventions, accumulated context. The answer isn't to reject vibe coding. It's to harness it—add constraints, guardrails, structure. Without them, you get tech debt at AI speed, errors faster than humans can review, and code that works but nobody understands why.

The physics I explore in Code Owns Truth point toward the same conclusion: constraints are the design layer. Prompts express intent. Code owns truth. The vibe is real, but the vibe needs boundaries.


#The Ecosystem Shakeout

So I started looking at what's actually getting disrupted, and the pattern is clearer than the market panic suggests.

#Ticketing: Adapt or Replace?

The ticketing systems that track human work are racing to become platforms for AI work.

Atlassian launched Rovo—AI agents inside Jira that triage tickets and route work. They're treating agents as a feature layer on top of existing workflows.

Linear went further with "Linear for Agents"—AI as full workspace members, assigned to issues, @mentioned in comments. The human remains "primary assignee" while the agent is a "contributor." Accountability preserved, execution delegated.

Meanwhile, new tools like Beads skip the adaptation entirely—built for agents from scratch, no legacy assumptions about human workflows.

The question: Do you adapt existing tools for AI, or build new tools for an AI-first world? Linear's hybrid might win the transition. AI-native tools might win the destination. Jira's adding AI to human bureaucracy—that's a harder pivot.

#Automation: Zapier vs. n8n vs. Claude Cowork

The workflow automation platforms face an existential question: What happens when AI can just do the thing?

Zapier's response: lean into it. They shipped Agent Skills for Claude—MCP integrations that let Claude trigger Zapier automations across 8,000+ apps. They achieved 89% AI adoption internally with 800+ agents deployed. Their strategy: become the glue between AI and everything else.

n8n is betting on hybrid workflows—AI for the intelligence, n8n for the plumbing. Claude generates n8n workflows. n8n connects to everything. The platform becomes an orchestration layer that AI writes to.

But here's the threat: Claude Cowork doesn't need Zapier. If the AI can directly access APIs, authenticate with services, and execute multi-step workflows autonomously—why route through a middleman?

The automation platforms survive if they become connectors (the authentication and API glue that AI uses), guardrails (human-in-the-loop checkpoints for risky operations), or monitoring (observability for what agents are doing). They don't survive as "no-code" tools for humans who can't code. That market is evaporating.
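The "guardrail" survival path can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's API: the action names, threshold set, and approval callback are all invented to show the shape of a human-in-the-loop checkpoint that holds risky agent operations for review.

```python
# Hypothetical sketch of a human-in-the-loop guardrail: risky agent
# actions are held for approval instead of executing immediately.
# Action names and the risk set are illustrative, not a real platform's API.

RISKY_ACTIONS = {"delete_records", "send_payment", "modify_permissions"}

def run_action(action, payload, approve):
    """Execute low-risk actions directly; route risky ones through a
    human approval callback before they touch production systems."""
    if action in RISKY_ACTIONS and not approve(action, payload):
        return {"status": "blocked", "action": action}
    return {"status": "executed", "action": action}

# An auto-deny policy for unattended runs: nothing risky executes.
deny_all = lambda action, payload: False

print(run_action("send_payment", {"amount": 500}, deny_all))
# {'status': 'blocked', 'action': 'send_payment'}
print(run_action("read_report", {}, deny_all))
# {'status': 'executed', 'action': 'read_report'}
```

The design point is that the checkpoint sits between the agent and the side effect, so the platform stays in the loop even when the AI plans the workflow end to end.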

#The Pattern

Tools survive by becoming either infrastructure (AI needs you) or judgment aggregators (humans need you for decisions AI can't make). Everything in the middle—tools that automate what AI now does natively—faces compression.

Core infrastructure isn't going anywhere: AWS, GCP, Cloudflare, Fly.io. Context repositories become more valuable, not less: Notion, Confluence, wikis — AI needs business context, decisions, history.

Single-function SaaS is in trouble. If Claude can do your core function, you're a feature now. "No-code for humans" is in trouble — the target user can now just ask AI. And expensive human-expertise platforms — legal research, financial analysis — are exactly the categories Cowork demonstrated it can approximate.

The uncertain middle is wide: observability platforms built for human dashboards, GitHub competing with its own Copilot, Slack wondering if it becomes an agent coordination hub or gets bypassed entirely, Jira carrying too much legacy to pivot fast but too much lock-in to die quickly.

#The Forward Deployed Engineer Illusion

There's a role that's been trending in enterprise sales: the "Forward Deployed Engineer." Palantir pioneered it—embed engineers at customer sites to handle the complex integration work that software alone can't solve. The pitch: enterprise systems are too messy, too customized, too entangled for self-service. You need humans on the ground.

I think this is temporary.

Every major AI company is racing to be the enterprise agent layer—Cowork, Frontier, Copilot Studio. And the integration messiness that justifies FDEs is exactly what vibe working dissolves.

From my own experience: Claude Code is remarkably good at reading API documentation—even poorly written ones—and using existing CLI tools to explore, investigate, triage, and connect systems together. It's not perfect for large datasets or deeply stateful processes, but it's improving fast.

The contrast with browser automation is stark. Amazon's AGI Lab published research on why browser use is so hard for AI agents: "Multiplication of uncertainties is the killer of reliability." Each step—perception, actuation, page load—might succeed only a certain percentage of the time. Multiply those probabilities across a multi-step task and reliability tanks. The WebArena benchmark shows even top models achieve only 35.8% success rates on real-world web tasks.
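The compounding effect is easy to see with toy numbers. The per-step rates below are illustrative, not figures from the Amazon paper — but twenty steps at 95% each happens to land right around the 35.8% WebArena figure, which gives a feel for the scale of the problem.

```python
# Toy model of compounding step reliability in a multi-step agent task.
# Per-step success rates are illustrative, not measured values.

def task_success_rate(step_rates):
    """Overall success is the product of independent per-step probabilities."""
    total = 1.0
    for p in step_rates:
        total *= p
    return total

# Even a 95%-reliable step, repeated across a 20-step browser task,
# drags overall reliability down to roughly one in three.
print(f"{task_success_rate([0.95] * 20):.3f}")  # 0.358
```

This is why each unreliable layer (perception, actuation, page load) matters so much: the failures multiply rather than add.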

But CLIs and APIs? Those are deterministic. The "complex enterprise context" that used to require months of on-site engineering becomes a conversation. Claude Code doesn't care that your Salesforce instance has 47 custom fields and a decade of technical debt. It reads the schema, understands the constraints, and builds the integration.

Maybe I'm naive to enterprise. But I don't think integration points remain messy for long when AI can read your existing codebase and infer conventions, generate adapters between incompatible systems, and handle the long tail of edge cases that used to require human judgment.

The FDE model assumes that understanding a customer's systems is hard work that scales linearly with headcount. Vibe working makes it scale with inference. The engineers who spent years learning one customer's Byzantine internal systems? That knowledge moat is evaporating.

#Who's Winning the Vibe Coding Wars?

The AI coding tools have stratified fast. 85% of developers now use AI tools regularly, with Claude Code, Cursor, and Codex fighting for dominance. Each is racing to ship multi-agent orchestration—the ability to spawn and coordinate multiple AI agents on a single task.

The pattern: Anthropic is winning on integration, Cursor on UX, OpenAI on raw capability, Google on... patience. But every provider wants to be the platform, not just the model. Anthropic cracked down on third-party harnesses last month—the message: flat-rate pricing requires their tools.

For the full breakdown of what's shipping and how to choose, see The Multi-Agent Moment.

#The Subsidy Question

Here's the uncomfortable math: current AI pricing is subsidized by investor capital, not sustainable economics.

OpenAI spent $22 billion in 2025 against $13 billion in revenue—$1.69 for every dollar earned. They project $74 billion in operating losses in 2028 alone, with cumulative cash burn reaching $115 billion through 2029. The bet: hit $200 billion in revenue by 2030 and turn profitable then.
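The figures above reduce to simple arithmetic worth checking. Using only the numbers quoted in this section (and assuming the $13 billion 2025 revenue as the base for the 2030 target):

```python
# Back-of-envelope check on the burn figures quoted above.
spend_2025, revenue_2025 = 22e9, 13e9
burn_per_dollar = spend_2025 / revenue_2025
print(f"${burn_per_dollar:.2f} spent per $1 earned")  # $1.69 spent per $1 earned

# Implied growth to reach the $200B-by-2030 target from a $13B 2025 base:
# five years of compounding at rate r, so (1 + r)^5 = 200 / 13.
target_2030 = 200e9
years = 5
cagr = (target_2030 / revenue_2025) ** (1 / years) - 1
print(f"{cagr:.0%} annual revenue growth required")  # 73%
```

Sustaining roughly 73% compound growth for five straight years is the quiet assumption behind "turn profitable then."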

Anthropic is more disciplined. Their cash burn is projected to drop to one-third of revenue in 2026 and just 9% by 2027, with break-even expected in 2028. They're avoiding expensive video and image generation, focusing on corporate customers (80% of revenue).

What does this mean for builders?

The current pricing is artificially cheap. API costs reflect what investors are willing to subsidize, not what the compute actually costs. When the subsidy ends—through profitability pressure, funding crunches, or market corrections—prices go up.

Lock-in gets more expensive over time. If you build deep dependencies on one provider's cheap API pricing, you're exposed when they need to raise prices. Anthropic's harness crackdown is a preview: subscription arbitrage disappeared overnight.

The endgame is unclear. OpenAI is betting on dominance—spend everything to win the market, then monetize. Anthropic is betting on efficiency—reach profitability faster with less risk. Google is betting on integration—bundle AI into existing products. All three could work. All three could fail.

The honest answer: nobody knows what sustainable AI pricing looks like yet. We're all building on shifting sand. The companies burning billions are guessing too.


#What Becomes Valuable

Here's a frame I keep coming back to: buying SaaS is paying for opinions.

When you buy software, you're not just buying features. You're buying someone else's opinion about how work should flow. Their assumptions about what steps come first, what fields matter, what the happy path looks like. Sometimes those opinions are useful. Sometimes they're constraints that don't fit how you actually work.

AI dissolves those opinions. Instead of adapting to Jira's workflow or Salesforce's data model, you describe what you need and the system adapts to you. The opinionated software layer becomes optional.

So what survives?

Models — The reasoning engines themselves. Anthropic, OpenAI, Google. They're the new primitives. Everything else is built on top.

Data — Not data "sellers" exactly, but data sources. The AI labs are paying real money: Reddit pulled $203 million in data licensing, News Corp got ~$250 million over five years, OpenAI offers $1-5M per corpus. Shutterstock is pivoting from stock photos to "AI services for model training." The value shifted from selling content to humans to licensing it to machines.

Infrastructure — AWS, GCP, Azure, Cloudflare. The compute layer. IaaS doesn't care what runs on it. If anything, AI makes infrastructure more valuable—it's compute-hungry and the demand is only growing.

The middle layer—SaaS tools that wrap workflows around human workers—that's what's compressing.

#The Data Paradox

Here's the tension: AI labs are paying unprecedented amounts for training data. But what happens when sources start locking it down?

Reddit went from a free API to $60M/year licensing deals. Stack Overflow licensed its data to OpenAI, and users deleted their answers in protest. News sites are blocking AI crawlers. Getty sued for copyright infringement.

The implications cut both ways:

If data stays open: AI gets smarter, models improve, the winners are whoever has the best reasoning engine. Data becomes a commodity.

If data locks down: We get balkanized AI. Models trained on different corpuses. Quality depends on who cut the best licensing deals. Data becomes a moat.

The honest answer: we don't know which world we're heading toward. The legal frameworks haven't caught up. The economic incentives point toward closure. But the technical reality is that models trained on open data are already out there, and you can't un-train them.

What's clear: the companies that control valuable data sources—Reddit, Stack Overflow, news archives, scientific journals—have leverage they didn't have before. Whether they use it to extract rent or build walls, the dynamics are shifting.


#The Weight of It

The $285 billion selloff isn't about Cowork or Agent Teams or any specific tool. It's about the market finally internalizing what builders have known for a year: AI changes the economics of knowledge work.

Software that used to require specialized teams can now be approximated by general-purpose agents. Legal research, financial analysis, code generation—the boundaries are blurring.

Employment for recent CS graduates has declined 8% since 2022 (Oxford Economics). 90% of tech workers now use AI in their jobs (Google). The funnel that used to produce senior engineers is narrowing at the entry point. We're not just changing how software gets built. We're changing who gets to learn how to build it.

"38% of engineering leaders fear juniors will get less hands-on experience in AI-heavy workflows." — CodeConductor

The response isn't to panic. It's to ask: what do I do that AI can't fake?

For me, it's judgment. Taste. The ability to recognize when something is wrong before I can articulate why. The willingness to own outcomes when systems fail.

These aren't skills you learn from a tutorial. They come from years of building things, shipping things, watching things break. They come from caring about craft even when nobody's watching.

AI makes execution cheap. That makes judgment expensive.

Some people will thrive in the orchestration era. They like systems thinking, coordination, judgment calls. Others will struggle. They liked the craft of code, the satisfaction of a clean implementation, the feeling of having made something. Both reactions are valid. Neither is wrong.

The existential crisis isn't about being replaced. It's about becoming someone new.


The tools for this transition are already shipping. In The Multi-Agent Moment, I break down what's available—Claude's Agent Teams vs. Gas Town vs. the community alternatives—and how to navigate the chaos.


If you're feeling the same weight, I'd like to hear how you're navigating it.

Tagged

  • ai
  • systems
  • judgment
Last updated: February 10, 2026
On the trail: Systems & Engineering