
What's Left: Software Engineering in the Agent Era

By Bri Stanback 26 min read

I know someone who just got laid off from Amazon. He was a contractor — did real work, computer vision, the kind of engineering that used to be its own moat. Now he's searching for work as an "AI engineer," which is what you call yourself in 2026 when you're a software engineer who wants to get hired.

Job postings with the title rose 143% last year. At senior levels, the two letters come with an 18% salary premium. He's not gaming anything. He's reading the market correctly.

I just don't have anything useful to offer him that fits on a LinkedIn headline.

I keep hearing the same reassuring phrases: judgment, taste, systems thinking, the human in the loop. I searched a few job boards to see how companies are actually hiring for this moment, and the postings read like a different language — "analytical thinking," "problem-solving," "collaboration." Skills so generic they could describe a golden retriever. Neither the tech platitudes nor the HR buzzwords are wrong, exactly. They're just... insufficient. They sound like what people say when they don't want to say "I don't know either."

So let me try to say something more honest.


#The Uncomfortable Middle

In late February 2026, Block cut 40% of its workforce — more than 4,000 people. Jack Dorsey said "intelligence tool capabilities are compounding faster every week." The stock went up 20%.

I read that on my phone while my daughter was eating breakfast. She was concentrating on getting yogurt from the bowl to her mouth with a spoon — that full-body focus three-year-olds have where the rest of the world disappears. And I sat there doing the math on what 40% of my own company would look like. Trying to keep my face normal. The yogurt hit the table instead of her mouth and she laughed, and I laughed, and the market was up and 4,000 people were updating their LinkedIn profiles. That's what this moment feels like from the inside. Two things at once that don't fit in the same frame.

We're not in abundance and we're not in apocalypse. We're in the uncomfortable middle where the tools are good enough to make a lot of existing work optional but not good enough to make the people who do that work unnecessary. Yet.

Anyone at a company can now fire up a coding agent and build something that works. Not something beautiful. Not something maintainable at scale. But something that runs, does the thing, and passes a demo. That was a six-figure job two years ago.

This doesn't mean engineers are done. It means the floor dropped. The minimum viable skill to produce working software just fell through the basement. And when floors drop, the interesting question isn't "does the building still stand?" — it's "which floors still matter?"


#The Scoreboard

Salesforce eliminated 4,000 support roles through AI agents — cut their support staff from 9,000 to 5,000. Benioff said half the work at Salesforce was being done by AI. Then, quietly, Salesforce executives admitted they were "more confident about the results than the results justified." That's a hell of an epitaph for 4,000 jobs.

And then there's Klarna. They cut headcount from 5,527 to 2,907 since 2022. Revenue per employee nearly doubled to $1 million. Revenue up 108% over three years. The dashboards glowed green. Then repeat customer contacts jumped 25%. One in four customers was coming back because their issue hadn't actually been resolved. Klarna had to rehire humans. Their CEO now talks about a "hybrid approach" and says customers need "a clear path to a human."

The pattern keeps repeating: cut aggressively, claim victory, discover the gaps, quietly rehire.

But here's the thing I keep wrestling with: how much of this is actually AI, and how much is pandemic hangover with better PR?

Tech companies hired recklessly during the pandemic — 700,000+ cuts globally since 2022 according to Layoffs.fyi, and the NYT notes much of that was a correction for overhiring, not AI displacement. IBM's CEO Arvind Krishna called it outright: "a natural correction," not AI. And even Sam Altman — the person with the most to gain from the "AI is changing everything" narrative — admitted that some companies are "AI-washing," blaming artificial intelligence for layoffs they would have made regardless.

The Guardian called it out: CEOs saying "we're integrating the newest technology" when what they mean is "we overhired and margins are tight." AI makes the cuts sound visionary instead of embarrassing.

So the honest answer is: it's both. Some jobs are genuinely being displaced by AI. Some companies are using AI as cover for a correction they needed to make anyway. And the really uncomfortable part is that it doesn't matter much to the person who lost their job which category they're in.

A Harvard Business Review survey from December 2025 found that 60% of organizations had already reduced headcount in anticipation of AI. Not in response to proven results — in anticipation. That's a bet, not a conclusion. And some of those bets are already losing.

But let's not sugarcoat the other side. Telegram runs on ~30 employees. A billion users. $30 billion valuation. No HR department. No physical headquarters. Durov described it as "a Navy SEAL team." They built this before the current AI wave. AI-native startups are now averaging $3.48 million in revenue per employee — six times traditional SaaS. (I wrote about this disruption from the enterprise side in The SaaSpocalypse — Jefferies literally used that word when they downgraded Workday and DocuSign last week.) The Klarna boomerang doesn't invalidate the trend. It just means the trend has teeth and some of those teeth bite back.


#What I Actually See Changing

I don't really write code much anymore. I don't look at code much. I have different agents evaluate it, and I know enough from twelve years of doing it that I can provide good guidance. But honestly — just articulating what you want clearly, without being prescriptive, gets you to roughly the same place. Maybe it costs a few extra tokens or an extra back-and-forth versus me knowing the answer. The delta is shrinking.

That's the quiet part that nobody in my position wants to say out loud. What this actually looks like day to day — the agent doesn't replace you, it changes what "you" means in the workflow — is something I explored in Pervasive AI.

The boilerplate layer is gone. Not going — gone. CRUD apps, standard API endpoints, form validation, data migrations, config files, CI pipelines. If it can be described in a sentence, an agent can build it. I used to pride myself on how fast I could scaffold a new service. That speed is now free.

The integration layer is compressing. Stitching together three APIs, handling auth flows, managing state across services — this used to be "senior engineer" territory. Agents are getting decent at it. Decent enough that a product manager with a coding agent can get 70% of the way there. The last 30% is where things get expensive, and that gap is real — but it's also shrinking.

The architecture layer is holding. Deciding what to build, how systems talk to each other at scale, what fails gracefully versus what fails catastrophically, where to put the boundaries. This still requires the scar tissue. For now. I want to be honest that "for now" is doing a lot of work in that sentence.

The taste layer is... complicated. Everyone says taste matters more. I think that's true but not in the way people mean. It's the ability to look at something an agent produced and know — in your body, not your head — that it's wrong. That the abstraction is leaky. That the error handling looks complete but misses the failure mode that'll wake you up at 3am. You know the feeling, right? That low-grade unease when a PR looks clean but something's off and you can't articulate what yet? I still have that when I review agent-generated code. But I got it from years of being the person who got woken up. If you skip the being-woken-up part, do you still develop the flinch?


#Where You Sit Changes What You See

I've been at small companies my entire career — under 50 people, since I was fifteen. Never worked at a FAANG. Actively avoided enterprise. What I see depends on where I'm standing, and I'm standing in a pretty specific spot. All of this is colored by that.

Big tech: Still hiring, but the ratio shifts. Fewer engineers, more leverage per engineer. The "staff+" tier gets more important — people who can evaluate what agents produce, set architectural constraints, own system-level decisions. Junior headcount shrinks. The intern pipeline narrows. This is already happening, and the people making the cuts aren't the ones who'll feel the talent gap five years from now.

Enterprise: Slower to change, as always. Compliance, security, legacy systems — these are moats against pure agent-driven development. But they're eroding moats. The engineers who thrive here will be the ones who understand the regulatory and organizational constraints, not just the technical ones. Knowing how to navigate a SOC 2 audit or talk a VP out of a bad architecture decision — that's engineering now, whether or not it involves code.

Mid-size companies: This is where it gets brutal, and it's where a lot of my friends work. A team of 5 engineers with agents might output what a team of 20 did in 2024. That's transformative for the companies and devastating for the people who made up the other 15. The "solid mid-level generalist" — the backbone of every engineering org I've ever worked in — is the role most under pressure. These are good engineers. They're not doing anything wrong.

Startups: The golden window. A technical founder with agents can build and ship a real product without a team. Right now, that's a superpower. But the window might be short — because if you can do it, so can everyone else. The moat isn't the software anymore. It's the distribution, the relationships, the domain knowledge the software encodes.

I keep coming back to Fred Brooks. He ran IBM's OS/360 project in the 1960s, and in 1975 he wrote The Mythical Man-Month — still one of the best books about software — arguing that adding people to a late project makes it later. The communication overhead compounds faster than the productivity gains. The agent-era version might be: adding agents to a bad architecture makes it worse faster. I've seen this. The fundamental insight is the same — more labor doesn't fix a clarity problem. It amplifies it.


#The Knowledge Moat Dissolves

About eight years ago, I made changes to a Linux kernel module implementing RFC 3489-compatible full-cone SNAT. I had no business working on kernel code. But with enough research, enough fiddling, enough stubborn persistence and late nights reading man pages that hadn't been updated since 2009, I got it working. That experience always felt like proof that a motivated generalist could go deep on almost anything given enough time.

AI just compressed the time.

I was interviewing someone recently whose son was into 3D printing. The kid used AI to generate STL files — skipped all the painful CAD fundamentals. Parametric constraints, tolerancing for real-world fit, designing for the limitations of the machine that's actually going to make the thing. The stuff that takes years of failed prints and jammed assemblies to internalize. He just described what he wanted and iterated. He didn't learn CAD. He learned to make things.

So is deep expertise still a moat? I'm genuinely not sure. If anyone can go deep on anything with agent assistance, the thing that differentiates people isn't what they know — it's what they choose to do with access to everything.

Stephen Covey wrote The 7 Habits of Highly Effective People in 1989 — one of those books that sounds like airport self-help until you actually read it. He had this line: it doesn't matter how fast you climb the ladder if it's leaning against the wrong wall. Maybe the real skill now isn't climbing — it's knowing which wall matters. Strategy. Synthesis. The ability to hold the business problem, the technical constraints, and the human dynamics in your head simultaneously and make a call that accounts for all three. Divergent thinking — looking at a problem and seeing an approach nobody proposed. Radical candor — telling your team the architecture is wrong before six months of momentum makes it politically impossible.

These aren't engineering skills, strictly. They're judgment skills that happen to be useful in engineering contexts. And they've never been taught through repetition or bootcamps or documentation. They come from exposure to complex situations where you had enough trust to make a consequential call and enough honesty to admit when you got it wrong.


#The Part Nobody Wants to Say

Software engineering as a career category might be contracting even as software itself eats more of the world. More software, fewer people writing it. That's the tension.

The optimistic read: engineers move up the stack. Less typing, more thinking. More architects, fewer coders. More product engineers who understand the why, fewer pure implementers.

The honest read: "move up the stack" assumes the stack has room at the top, and it doesn't — not for everyone. There are only so many architect roles. Only so many "taste" positions. The pyramid doesn't invert just because the base shrinks.

I think we're in the "fast enough to be painful, slow enough to be deniable" zone. The worst zone. Fast enough that people are losing jobs right now. Slow enough that executives can still say "we're investing in our people" while cutting 40% of them. I've sat in those meetings. The language is always optimistic. The spreadsheet isn't.


#The Access Question

AI abundance feels like inherited wealth. When everyone inherits capability they didn't earn, the differentiator isn't skill. It's purpose.

But first: access. My five-year-old M1 MacBook Pro got called "vintage" by the Genius Bar last month, and it runs everything I need. A $200 Chromebook can access Claude. The cost of building something went from "hire a team" to "describe what you want." That's genuinely transformative for people who have access.

But "cheap" is relative. Twenty dollars a month is nothing to me. It's a real decision for a lot of people. And the divide isn't just price — it's cultural. Do you know this exists? Do you know what to ask for? And the part that makes me uncomfortable: do you have the hours to tinker? The headspace to sit with a problem, fail at it, try again? A single parent working two jobs has the same tools I do. They don't have the same Tuesday afternoon.

Maybe that's just an excuse. Kids in developing countries are already using AI tools in ways that surprise everyone. Access to information was never the real bottleneck — maybe it was always access to belief that you could use it.

I don't know. But I think the question for society probably isn't "how do we distribute AI tools" — they're cheaper than ever. It's "how do we distribute purpose." That's much harder. And I'm not sure anyone's working on it.


#The Principles Don't Change

There's a version of this story where everything becomes a race to the bottom. Agents get cheaper, output gets faster, and the only thing that matters is who can ship the most stuff the quickest. I want to push back on that.

This next part is long. It's been on my mind for a while — the question of what actually keeps systems safe when the people building them are moving faster than ever. Bear with me.

There's a useful parallel in how AI companies themselves are wrestling with this. When OpenAI built GPT, they trained the model first and added safety guardrails afterward — a layer of reinforcement learning from human feedback (RLHF) where human reviewers would rate outputs and the model would learn to avoid the bad ones. It works, mostly. But the safety is essentially a fence around a field. The model learns what it shouldn't say, not what it believes.

Anthropic took a different approach with Claude. They developed what they call Constitutional AI — instead of relying on human reviewers to flag bad outputs one by one, they wrote a set of principles (a "constitution") and had the model critique and revise its own outputs against those principles during training. The constitution includes things like "choose the response that is most supportive and encouraging of life, liberty, and personal security" and "choose the response that is least likely to be used for intimidation or coercion." The model doesn't learn "don't say this specific thing" — it learns to reason about whether its output aligns with a set of values.
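In code, the shape of that loop is simple. Here's a minimal sketch, assuming only a generate() callable that wraps whatever model API you have; the principle strings are paraphrased from the constitution above, and this is an illustration of the idea, not Anthropic's actual training pipeline.

```python
from typing import Callable

# Paraphrased principles, standing in for a much longer constitution.
PRINCIPLES = [
    "Choose the response most supportive of life, liberty, and personal security.",
    "Choose the response least likely to be used for intimidation or coercion.",
]

def critique_and_revise(prompt: str, generate: Callable[[str], str]) -> str:
    """One constitutional pass: draft, self-critique against each principle, revise."""
    draft = generate(prompt)
    for principle in PRINCIPLES:
        # The model critiques its own draft against the principle...
        critique = generate(
            f"Principle: {principle}\nPrompt: {prompt}\nDraft: {draft}\n"
            "Critique the draft against the principle."
        )
        # ...then rewrites the draft in light of that critique.
        draft = generate(
            f"Revise the draft to satisfy the principle.\n"
            f"Principle: {principle}\nCritique: {critique}\nDraft: {draft}"
        )
    # In the real method, the revised outputs become training data, so the
    # values end up in the weights rather than in a runtime prompt.
    return draft
```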

The difference matters. One approach says: here are the walls, don't hit them. The other says: here's who you are, act accordingly. Dario Amodei talks about this as foundational — the constraints aren't a limitation on the system, they are the system. The principles define what "good" means before you start optimizing for anything else.

This isn't hypothetical. Last week, it played out in public. Anthropic refused to remove two restrictions from its Pentagon contract: no mass surveillance of American citizens, and no fully autonomous weapons systems. Their reasoning was specific — Amodei published a letter arguing that current AI models simply aren't reliable enough for autonomous kill decisions, and that domestic surveillance violates constitutional principles the company won't compromise on. The Pentagon labeled them a supply chain risk. Trump ordered all federal agencies to stop using Anthropic's technology. Hours later, OpenAI struck a deal to deploy on the Pentagon's classified networks.

The easy narrative is "Anthropic good, OpenAI bad." I don't think it's that simple. Altman said OpenAI shares the same red lines — no autonomous weapons, no mass surveillance. Maybe they got better terms. Maybe the terms are meaningless. Maybe the Pentagon needed someone and OpenAI was willing to be that someone. I don't know what the contract says and neither does anyone else reporting on it.

What I do know is that Anthropic walked away from a $200 million contract because the terms conflicted with their principles. That's the constitutional approach taken to its logical conclusion — the principles aren't just in the model's training, they're in the company's decision-making. "Here's who we are, act accordingly" applied to a business, not just a neural network. Whether that's principled or naive probably depends on what the next five years look like. But it's the clearest real-world example I've seen of the difference between advisory values (we believe this) and structural ones (we won't do this, even when it costs us).

I think that's the right frame for engineering in this era too — not just for training AI models, but for the organizations deploying them and the people building within them.

I felt this in my own work last month. I had an agent refactor a service — nothing dramatic, just cleaning up some tech debt. The code looked good. The tests passed. I almost shipped it. Then I noticed it had reorganized the error handling in a way that swallowed a specific timeout condition. The kind of thing that looks clean in a diff and wakes you up at 3am when a downstream service hangs. I caught it because I'd been the person on that 3am call, years ago, staring at logs that showed "success" while the system was quietly dying. The agent didn't know that history. It optimized for the code. I optimized for the scar.
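The diff itself was unremarkable. Reconstructed from memory with made-up names (the service, endpoint, and function are hypothetical), it looked roughly like this:

```python
import requests

# Before: a timeout surfaces as an exception, a retry or an alert fires,
# someone notices.
def fetch_inventory(sku: str) -> dict:
    resp = requests.get(f"https://inventory.internal/items/{sku}", timeout=2)
    resp.raise_for_status()
    return resp.json()

# After the "cleanup": one tidy except block, and the timeout that used to
# page someone now reads as "no inventory" and returns success.
def fetch_inventory_refactored(sku: str) -> dict:
    try:
        resp = requests.get(f"https://inventory.internal/items/{sku}", timeout=2)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # requests.Timeout is a subclass of RequestException, so the hang
        # downstream gets swallowed along with everything else.
        return {}
```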

That's the advisory layer — my judgment, my experience, pattern-matching against things I've seen go wrong. It works because I was paying attention. It wouldn't have caught it if I'd rubber-stamped the PR.

Now consider what happened at AWS in December 2025. Amazon's own AI coding tool Kiro caused a 13-hour outage after it decided to "delete and recreate the environment." Amazon called it "a user access control issue, not an AI autonomy issue" — the agent had broader permissions than intended. Multiple employees told the Financial Times it was "at least" the second recent AI-caused disruption. Amazon had been pushing 80% weekly Kiro adoption targets internally. The root cause was probably a blend of the agent's judgment and the permissions it was given — it usually is. But the point isn't to relitigate one incident. It's that speed of adoption outpaced the design of constraints around it. The answer is better guardrails, not fewer agents.

In practice, guardrails come in two flavors. The first is advisory — system prompts, grounding documents, principles that shape behavior through context and intent. My catching that timeout bug was advisory. Code review culture is advisory. It works because people internalize the norms, but it depends on someone paying attention and having the scars to know what to look for. There's a world where advisory gets good enough — rich context, chain-of-thought reasoning that identifies failure modes before they happen. With enough grounding, an agent could probably avoid 99% of the catastrophic decisions on its own. But 99% at scale is still a lot of incidents. And an advisory layer is porous by nature, which is a strange thing to bet a production environment on.

Both RLHF and constitutional AI sit somewhere in between — the safety isn't just in the prompt, it's baked into the model's training weights. The model has internalized the values, not just been told them. That's meaningfully more robust than a system prompt, but it's still not a hard guarantee. Trained-in models can still be jailbroken, still make mistakes under edge cases. It's internalized advisory rather than instructed advisory — a real distinction, but still not structural. (I find the constitutional approach more compelling — teaching values scales better than cataloging violations. But both matter. One sets the principles, the other handles the edge cases the framers didn't anticipate. Constitution and case law.)

The second is structural — hard limits that don't depend on anyone's judgment in the moment. I have a pre-commit hook that runs linting and type checks on every commit. It's caught things I would have missed. It doesn't care if I'm tired, distracted, or rushing to ship before a meeting. That's the difference. Permission boundaries. Blast radius controls. Infrastructure-as-code policies that make it physically impossible to delete a production database without a specific approval workflow. Amazon's IAM is a structural guardrail — it was just scoped too broadly, not tested against the scenario of an autonomous agent deciding to recreate an environment. The guardrail existed. It just had a hole in it.
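For what it's worth, that pre-commit hook is the humblest version of a structural guardrail. A sketch of one, assuming ruff and mypy as the linter and type checker; substitute whatever your stack uses:

```python
#!/usr/bin/env python3
# Structural guardrail: a git pre-commit hook that blocks the commit unless
# lint and type checks pass. Save as .git/hooks/pre-commit and mark it
# executable. The tool choices are examples, not requirements.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # lint
    ["mypy", "."],           # type-check
]

def main() -> int:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"pre-commit: `{' '.join(cmd)}` failed; commit blocked.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```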

The best safety systems use both layers — advisory to shape intent, structural to bound consequences. But if you have to pick one, pick the hard limits. Culture drifts. Hooks don't.

The popular reaction to incidents like this is predictable: "See? This is why you need human engineers!" And yes — but not in the way people mean. Not human-in-the-loop, approving every action. More like humans tending the loop — designing the constraints, evolving them as the system grows, deciding which values the guardrails encode in the first place. An agent could probably design a decent blast radius policy. But deciding which values to encode, what tradeoffs to accept, who you're building for — that's still a human call. Not because agents can't reason about ethics, but because the accountability has to land somewhere with a pulse. The rules of the road are a human responsibility, even if agents help draft them.

And this applies beyond the machine layer. I've been in rooms where the engineering team could have built something faster, cheaper, more engagement-optimized — and the right call was to not build it. Or to build it differently. Increasingly, the role of engineering is participating in the product itself — not just building what's specced, but shaping what gets built and how. The companies that thrive in this era will be the ones that treat their technical staff as partners in product decisions, not just executors. Because when execution is cheap, the hard part isn't building the thing. It's deciding whether the thing should exist.

When anyone can build anything, the differentiator isn't output — it's what you refuse to ship.

It's having opinions about accessibility before the feature ships, not after someone files a complaint. It's caring about data privacy when the expedient thing is to log everything and sort it out later. It's asking whether the thing you're building makes someone's life genuinely better or just extracts their attention more efficiently. These aren't nice-to-haves. They're the constraints that make the product worth trusting — applied at the level of what you choose to build, not just how you build it.

I've watched teams sprint to build features that nobody should have built. I've shipped things I'm not proud of because the deadline mattered more than the principle. Those feel worse now than they did at the time. The speed was never worth the trade.

This isn't nostalgia for a slower era. It's the opposite — when the tools let you move faster than your judgment, your principles are the only braking system you have. You don't give up your values in this transformation. You need them more, not less. The engineers and organizations that hold the line on "we don't build it that way" will be worth more than the ones who build everything as fast as possible.

I wrote about this in Why Everyone Should Have a SOUL.md — the idea that documenting your principles isn't just self-help, it's infrastructure. It's knowing what wall your ladder leans against before the wind picks up.


#The Apprenticeship Problem

This is the part that worries me most.

My first real engineering job, I spent three months writing data migration scripts. Nobody's idea of glamorous work. Move this field to that table. Handle the nulls. Run it against staging, watch it break, figure out why, fix it, run it again. I did this dozens of times, and each time something different went wrong — character encoding, timezone mismatches, a foreign key I didn't know existed. By the end, I could look at a schema and feel where the landmines were before I stepped on them.

That feeling — the anticipatory flinch — is what I'd call judgment. And I didn't get it from a book or a lecture. I got it from repetition that was tedious enough to be annoying and consequential enough to be memorable.

I'm not sure where that comes from anymore.

The career ladder wasn't just hierarchy — it was a compression gradient. Low-risk tasks at the bottom. Higher-stakes ambiguity at the top. You earned your way upward by surviving increasingly consequential decisions. AI compresses the bottom of that ladder. The rungs aren't just harder to reach — some of them are gone.

Entry-level postings shrank about 60% between 2022 and 2024. By late 2025, 76% of employers were hiring the same number or fewer entry-level staff. The NACE Job Outlook 2026 survey shows employer optimism about graduate hiring at its lowest since 2020. One engineering manager told Pragmatic Engineer: "We paused junior hiring about 3 years ago."

Judgment is not abstract reasoning. It's exposure to constraint. The memory of consequences. The way your stomach drops when you see a migration script that doesn't handle rollback, because you've been the person who had to roll back manually on a Wednesday night while everyone else was asleep. Historically, apprenticeship solved this — electricians learned beside master electricians, journalists rewrote drafts under sharp editors, designers absorbed taste through critique. The friction was the curriculum. Nobody planned it that way. The tedium just happened to be educational.

If AI removes friction at the execution layer, apprenticeship has to migrate somewhere else. Maybe the first rung becomes evaluation instead of implementation. Maybe juniors learn to critique agent output, trace failure modes, define constraints, and decide what not to ship. But that's learning to evaluate without having done. Learning to recognize mistakes you haven't personally made. I'm not sure that works. The scar tissue metaphor isn't just poetic — you literally need to have been burned to flinch at the right moment.

A system that optimizes away beginner work risks optimizing away beginner growth. And organizations that stop hiring juniors eventually starve their own future seniors. Every industry that eliminated apprenticeships eventually faced a skills crisis a generation later. We know this.


#What I'd Actually Tell Someone

If I'm being honest with my friend — the one with real skills and kids and a job search that can't wait for the market to figure itself out — the advice is different than what I'd tell a new grad. He doesn't need to retrain. He needs to find the place where what he already knows meets something agents can't cheaply replicate. Computer vision plus manufacturing. ML plus compliance. The compound skill — technical depth married to a domain where the stakes are personal and the liability is real. That's not a pivot. That's leverage.

For someone earlier in their career — someone facing the apprenticeship crisis I just described — the calculus is different:

Go deep on something where the stakes are real and the liability is personal. Distributed systems. Security. Performance at scale. The stuff where getting it wrong costs millions or kills people. Agents will get better at these too, but the liability question buys you time.

Learn to evaluate, not just produce. The skill isn't writing code — it's reading what an agent wrote and knowing what's wrong with it before it hits production. I spend more time reviewing agent output than I ever spent writing code myself. It's a different muscle. It's also a more valuable one.

Build things with real users. Not demos. Not tutorials. Not a course project. Something with users who depend on it, that breaks in ways you have to fix on a deadline you didn't set. The gap between "it works" and "it works for 10,000 people who are angry when it doesn't" — that's where humans still live. That's the new apprenticeship. It's lonelier than having a team and a mentor. It's also more available than ever, because the tools to build are nearly free. The friction isn't gone. It just moved.

Think beyond the code. Strategy, organizational awareness, the ability to synthesize across domains — these compound in a way that pure technical skills don't. The person who can see the whole board is more valuable than the person who can move any individual piece really fast.

Both groups, honestly: pick something you want to make and don't stop until it works. The tools will meet you wherever you are. I wrote about this in Building at the Speed of Thought — when execution is nearly free, iteration replaces deliberation. That's always been true. AI just made it more obviously true.

Don't sleep on the physical, either. Rent a Human — a marketplace where AI agents literally hire humans for physical tasks, because software still can't open a door or shake a hand. The physical world is gated, and that gate isn't opening anytime soon. But beyond the dystopian framing, there's something real underneath: small jewelers, specialty manufacturing, craft work — things where the human touch is the product, not the process. When everything digital becomes abundant, scarcity moves to the tangible. It sounds like a retreat. Might be an advance.


#What I Don't Know

I don't know if "AI engineer" is a real role or a transitional label. LinkedIn says it's one of the fastest-growing titles over the past three years, alongside "Forward-Deployed Engineer" and "Data Annotator" — a list that tells you something about how the market is trying to name what's happening, and not quite getting there. I've watched this happen before.

I graduated right into the Hadoop wave. "Big Data Engineer" was the title that got you hired in 2013, and if your resume didn't mention MapReduce you were invisible. Hadoop died, but big data didn't — it diffused into data warehouses, Databricks, lakehouses, data mesh, dbt, distributed query engines. The title disappeared because the work won. It won so thoroughly it stopped being a specialization and became the plumbing.

"NoSQL specialist" was a personality trait for about three years. MongoDB on everything, even where Postgres would've been fine. The industry eventually landed on "it depends on your access patterns" — which is what the senior people were saying the whole time.

"Web developer" was a title I held early in my career. I couldn't tell you what it means now. I do know that frontend is still a deep discipline — but the web is also just where software lives. Almost every engineer is expected to throw together a UI or build a RESTful endpoint. The specialty sharpened and the floor rose at the same time.

"Cloud Architect" carried weight when migrating to the cloud was a bet that could sink a company. Now it's where things run.

DevOps started as a movement — development and operations working together, not throwing code over the wall. Companies couldn't figure out how to do that organically, so they turned it into a title: "DevOps Engineer."

Now the culture is actually landing. Werner Vogels' "you build it, you run it" stuck — developers own the full lifecycle, deploy their own code, page themselves when it breaks. The dedicated title is dissolving because the expectation got absorbed into the engineering role itself. Infrastructure specialists still exist, but they're less "bridge between two teams" and more platform engineers — building the internal tools and guardrails so everyone else can self-serve. Same pattern as frontend: the specialty sharpened while the floor rose.

The trajectory is always the same: specialty → mainstream → implicit → what was the title for again?

That arc might be the most relevant one for AI. Right now we're hiring "AI Engineers" because we don't know how to make it the culture yet. But the specialty will split the same way: on one end, the deep work — building transformers, training models, designing embedding spaces. On the other, something more operational and advisory — coaching teams on multi-agent coordination, prompt engineering, model selection, setting up the guardrails and review patterns so everyone else can use agents effectively. Less "I build the AI" and more "I make sure we're using it well." The platform engineer of the agent era.

And then everyone else — using agents as part of their job the way they use Git or AWS today. Not specialists. Just engineers. Fewer of them, probably. But the work doesn't disappear — it changes shape. More surface area to tend, more products to maintain, more decisions that need a human accountable for the outcome. You can't vibe-code a company's production systems forever. Someone has to own what ships.

And there's a version of this — Jevons Paradox — where making software cheaper to produce means we produce more of it, not less. More software, more surface area, more need for people who can tend it. History says efficiency doesn't reduce demand. It creates it.

What I don't know is what the titles look like on the other side. The skills I described — taste, judgment, constraint design, knowing what not to build — none of those map cleanly to a job listing. "Experienced enough to flinch at the right moment" doesn't fit on a resume. The market is going to lag reality here, the way it always does. For a while, the titles will be wrong. They'll reward the legible thing (AI experience, agent fluency) and undercount the illegible thing (scar tissue, organizational wisdom, the ability to say no). My friend from Amazon will probably land fine. He's good at what he does, and the market is paying for his keywords. But the gap between what gets him hired and what makes him valuable — that's the gap this whole essay is about.

I don't know what my own job looks like in three years, and I've been doing this for twelve.

And I keep thinking about the economic shape of all this. Moody's Analytics reported in late 2025 that the top 10% of earners now account for nearly half of all U.S. consumer spending — a historic high. Knowledge workers are disproportionately in that top 10%. Their jobs are exactly the ones most exposed to this shift. The Klarna model — half the people, higher salaries — might be the optimistic version. The pessimistic version is entire layers of well-compensated work disappearing, and the consumer spending that depended on them going with it. The economy is lopsidedly dependent on a group of people whose jobs are being redefined in real time. That's a tension I don't see anyone resolving cleanly.

What I do know: things never pan out the way people imagine. The doomsayers and the utopians are both going to be wrong. The reality will be weirder and more uneven than either camp predicts. Some industries will be fine. Some will be devastated. Most will be somewhere in between — changed enough to be disorienting, stable enough to be recognizable.

The amplitude is increasing. The frequency is increasing. The feeling of "new but also more of the same" is exactly right. Every revolution feels like this from inside.

If there's one thread running through all of it — the apprenticeship, the guardrails, the access question, what's left — it's that purpose isn't a skill you can automate. It's the thing that makes every other skill worth having.

Tagged

  • ai
  • building
  • culture
Last updated: March 5, 2026
On the trail: Systems & Engineering