
Your Infrastructure Spec Already Moved Into Your Code

By Bri Stanback · 11 min read

This comes from running Cloudflare Workers in production, watching our team's AI agents try to reason about wrangler configs, and then watching Evan You announce Void.

Yesterday, Evan You stood in front of a camera, announced that Vite+ is open source under MIT, and then did the "one more thing." Void. A deployment platform where your code is your infrastructure. void deploy scans your source, detects what you use — database, KV, queues, cron, auth, AI inference — and provisions it. No config files. No dashboard clicks. Built on Cloudflare.

The reaction was predictable. Half the room said *finally*. The other half said *I'll never trust magic provisioning in production*.

Both are right. And neither is asking the real question.

# The Spectrum

Every infrastructure tool exists on a single axis: how far is the spec from your application code?

| Tool | Distance | What you write |
| --- | --- | --- |
| Terraform / Pulumi | Far — separate codebase | HCL modules or TypeScript stacks that mirror your architecture in a parallel universe |
| Kubernetes / Crossplane | Medium-far — declarative manifests | YAML custom resources reconciled by a control loop. Infra and app deploy from the same plane, but the spec is still YAML and the state model is still "desired vs. actual" with drift |
| AWS CDK / SST | Medium — adjacent code | TypeScript constructs that describe infrastructure, compiled to CloudFormation |
| Wrangler / `wrangler.jsonc` | Close — config next to code | A JSON file that tells the platform what bindings your Worker needs |
| Encore | Closer — inferred from code | `new SQLDatabase("orders")` in your app code provisions RDS automatically |
| Void | Closest — code is infra | Import a module, deploy. The platform figures out the rest |

The direction is unambiguous. The debate is about how fast to move.

# Why This Is Happening Now

Two forces are colliding.

AI agents need coupled context. To be fair: LLMs can generate decent Terraform. They can get most of the way there. And you can give an agent access to CLI tools — gcloud, aws cli — to inspect live configuration and validate it against the spec. It's not hopeless.

But here's the thing: your application code is coupled to your infrastructure. Your API needs that database, that queue, that KV store. The code won't work without them. Yet the spec that describes that infrastructure lives in a different file, maybe a different repo, written in a different language — HCL, YAML, CloudFormation JSON — with no compile-time check that the two agree. The coupling is real but the contract is informal. You don't find out they've drifted apart until deploy time, or worse, runtime.

YAML is the worst offender. It's not strongly typed. Terraform will catch syntax errors at plan before anything touches production — that part works. The dangerous failures are the ones that parse fine but mean the wrong thing. A misspelled key name is valid YAML. A missing environment variable is a valid ConfigMap. It deploys cleanly and breaks at runtime. No linter catches it because no linter knows what the key should be. That's why tools like Pulumi moved to real programming languages — TypeScript, Go, and Python (which is technically typed if you squint and believe hard enough) — where you get type checking, IDE support, and at least some compile-time guarantees. That was already the right instinct.
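
To make the difference concrete, here's a minimal sketch of what a typed language buys you over a stringly-typed ConfigMap. The names are hypothetical; the point is that the misspelled key becomes a build failure instead of a runtime surprise.

```ts
// Hypothetical example: the same config expressed as a typed value instead of YAML.
interface ServiceEnv {
  DATABASE_CONNECTION: string; // the key the application actually reads
  QUEUE_NAME: string;
  LOG_LEVEL: "debug" | "info" | "warn" | "error";
}

const env: ServiceEnv = {
  DATABASE_CONNECTION: "postgres://orders-db:5432/orders",
  QUEUE_NAME: "orders-events",
  LOG_LEVEL: "info",
  // DATABASE_CONECTION: "postgres://...",
  //   ^ the ConfigMap failure mode: valid YAML, deploys cleanly, breaks at runtime.
  //     Here it's an excess-property error the compiler rejects before anything ships.
};

export default env;
```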

Pulumi got halfway there — real languages, real types. But you're still writing infrastructure about your application in a separate place. Encore and Void take the next step: infrastructure inside your application. new SQLDatabase("users") or import { KV } from "void" — the coupling becomes explicit. One codebase, one type system, one dependency graph. An agent isn't reasoning about two separate descriptions of the same system. It's reading the system itself. That's not a developer experience story. That's a leverage story.
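
In code, that looks roughly like the sketch below. It follows Encore's TypeScript SDK as documented; treat the module paths and options as assumptions to verify against your Encore version.

```ts
// Sketch of the Encore.ts pattern: the database declaration is both the
// application dependency and the provisioning spec.
import { api } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";

// One line, one codebase, one dependency graph.
const users = new SQLDatabase("users", { migrations: "./migrations" });

export const getUser = api(
  { method: "GET", path: "/users/:id", expose: true },
  async ({ id }: { id: string }) => {
    // Delete the declaration above and this no longer compiles; the coupling is
    // enforced by the type system rather than discovered at deploy time.
    const row = await users.queryRow`SELECT id, name FROM users WHERE id = ${id}`;
    return { user: row };
  },
);
```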

Developer time is too expensive for a parallel mental model. A Terraform setup for a typical backend — database, Pub/Sub, cron — can run to hundreds of lines of HCL across multiple files. Someone has to understand, review, and update that code every time the application changes. For a five-person team without a dedicated DevOps engineer, that overhead competes directly with shipping product.

When your AI agent writes a feature that needs a new queue, and the queue requires a separate infrastructure PR with its own review cycle, you've created a serial bottleneck in what should be a parallel workflow. The infrastructure layer becomes a tax on the thing you actually care about: the product.
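
Contrast that with the collapsed model, where the queue and the handler that consumes it land in the same diff. A sketch, again Encore-style, with hypothetical event and topic names:

```ts
// Sketch (Encore-style; verify module paths against your version): the queue is
// declared in the same file, and the same PR, as the feature that needs it.
import { Topic, Subscription } from "encore.dev/pubsub";

interface InvoiceRequested {
  orderId: string;
}

// Adding this topic is the infrastructure change. No separate repo, no second review cycle.
export const invoiceRequested = new Topic<InvoiceRequested>("invoice-requested", {
  deliveryGuarantee: "at-least-once",
});

// The consumer the feature actually needed, reviewed in the same diff.
export const generateInvoice = new Subscription(invoiceRequested, "generate-invoice", {
  handler: async (event) => {
    console.log(`generating invoice for order ${event.orderId}`);
  },
});
```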

# The Layers Nobody Draws on the Same Whiteboard

Before I get further into this, I should name the thing I'm glossing over. That spectrum table above is really about provisioning — who creates the cloud resources. But provisioning is one of at least four distinct layers that are all collapsing simultaneously:

| Layer | What it does | Traditional tools | Where it's heading |
| --- | --- | --- | --- |
| Provisioning | Create cloud resources (databases, queues, DNS) | Terraform, Pulumi, CloudFormation | Inferred from code (Encore, Void) |
| Delivery | Get code from repo to running state | ArgoCD, Flux, Spinnaker | Collapsed into deploy commands (`void deploy`, `sst deploy`) |
| Pipelines | Build, test, validate | GitHub Actions, CircleCI, Jenkins | The triggers and event orchestration still matter — but the work inside the workflow is shrinking as toolchains (Vite+) and platforms absorb it |
| Developer Platform | Service catalog, docs, onboarding | Backstage, Port, Cortex | Either absorbed into the deploy platform or generated from code |

Each of these layers has its own ecosystem, its own debates, its own conferences. And each one is independently moving in the same direction: closer to the application code, further from standalone config.

At large companies, these layers get split across teams — a platform team for Terraform, SRE for ArgoCD, a build team for CI. I've never worked at a company like that. For twelve years I've been on small engineering teams, ranging from just me to maybe a dozen people, which means I am all three of those teams, and I feel the coordination tax every time I ship: I changed my application and now I have to update three separate systems to get it running.

The collapse isn't just about any single layer. It's about the gap between layers — the coordination cost of keeping provisioning, delivery, pipelines, and platform in sync with each other and with the application. Every time you add a queue to your app, you're touching Terraform and ArgoCD manifests and GitHub Actions workflows and maybe updating Backstage. The tools that are winning are the ones that eliminate the gap, not the ones that make any individual layer easier.

Void's bet is that all four layers collapse into one command — but only within Cloudflare's boundary. If your world fits on Workers, KV, D1, and Queues, you genuinely never touch another tool. The moment you need something outside that boundary — AWS RDS, GCP Pub/Sub, a managed Postgres that isn't D1 — you're back to Terraform for that piece, and the clean collapse gets messy. Encore collapses provisioning and delivery but deploys to your own AWS/GCP. SST collapses provisioning and pipelines on AWS. Each tool picks a different set of layers to absorb, and a different set of constraints to accept.

With that framing, the GitOps problem gets clearer.

# The GitOps Paradox

GitOps is specifically about the delivery layer — but it has a problem that bleeds into everything above and below it.

GitOps says: Git is the source of truth. You declare your desired state in YAML, commit it, and a reconciliation agent (ArgoCD, Flux) continuously syncs the cluster to match. Elegant in theory. In practice, it breaks in ways that look like success.

The failure mode is always the same: someone commits a change, the sync goes green, health checks pass, and production is broken. A typo in a ConfigMap key name. A missing environment variable. A canary annotation that got forgotten. The system synced perfectly to the wrong state. ArgoCD doesn't know the difference between "DATABASE_CONNECTION" and "DATABASE_CONECTION" — it just reconciles what you told it.

The deeper problem is the model itself. GitOps treats Git as the authoritative truth about what your infrastructure should be. But Git is an archive — it captures what someone intended at commit time. The actual truth is what's running in the cluster right now. And there's always a gap between the two. Configuration drift isn't a bug in GitOps. It's the fundamental physics: desired state and actual state are maintained in different systems, synchronized by a polling loop. The gap is structural.

For a human DevOps engineer, that gap is manageable. You build validation webhooks, OPA policies, canary rollouts, manual sync gates. You add layers of defense between the commit and the cluster.

For an AI agent? The gap is workable but expensive. An agent can verify what it deployed — it can shell out to kubectl, gcloud, aws cli, read the cluster state, compare running config against intended config. I know because we have agents doing exactly that. It works.

But think about what that workflow actually is: commit YAML to a repo, wait for ArgoCD to sync, poll the cluster through CLI tools, parse the output, compare it against the original intent, flag discrepancies. It's a round trip through three separate systems to verify something that could have been a type error at compile time. The semantic check that GitOps defers to runtime is a check that infrastructure-as-code could catch at write time.
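
Spelled out, that round trip looks something like the sketch below (Node/TypeScript, with made-up resource names): read the intended config, shell out for the actual state, diff the two.

```ts
// Sketch of the agent-side verification round trip (resource names are hypothetical).
// Intent lives in Git, sync happens in ArgoCD, truth lives in the cluster;
// the agent has to stitch all three together after the fact.
import { execFileSync } from "node:child_process";

// What the committed manifest intended.
const intended: Record<string, string> = {
  DATABASE_CONNECTION: "postgres://orders-db:5432/orders",
  LOG_LEVEL: "info",
};

// What is actually running right now.
const raw = execFileSync(
  "kubectl",
  ["get", "configmap", "orders-config", "-n", "orders", "-o", "json"],
  { encoding: "utf8" },
);
const actual: Record<string, string> = JSON.parse(raw).data ?? {};

// The semantic check GitOps defers to runtime: a compiler could have done this at write time.
for (const [key, want] of Object.entries(intended)) {
  const got = actual[key];
  if (got === undefined) console.error(`missing key: ${key}`);
  else if (got !== want) console.error(`drift on ${key}: expected ${want}, found ${got}`);
}
```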

This is why the spectrum is collapsing toward code. Not because GitOps is wrong — it was a genuine advance over SSH-and-pray. But because the gap between "what I declared" and "what's actually running" becomes more dangerous as agents do more of the declaring.

# The Lock-In Trap

OK — so the spectrum is collapsing. But toward what? Here's where I get skeptical.

The more implicit the tool, the faster the onramp — and the less obvious the exit ramp. Void's zero-config is real. But what happens when you need to leave?

Evan's been upfront about this: you don't bring your own Cloudflare account. Resources are provisioned and managed by Void, not in your CF dashboard. The lock-in enables the magic DX — and the eject means migrating your data off their managed infrastructure, not just swapping imports. That's the Vercel/Heroku model: your code, their infra. The lock-in isn't at the SDK layer. It's the data.

Compare Encore, which deploys to your AWS or GCP account. You own the resources from day one. encore build docker gives you a portable image. The exit is real because you were never on someone else's platform to begin with.

If lock-in scares you, Void isn't the tool — and Evan seems fine saying that. The bet is that the DX is worth the coupling, and for a lot of teams shipping fast on Cloudflare's primitives, it probably is.

Encore's exit is the most honest version of this: open source, your own cloud account, a portable image. But you're still using Encore's declarative patterns. The thin wrapper is still a wrapper.

The spectrum still has a rough symmetry: explicitness is proportional to portability. Terraform is painful to write and straightforward to migrate. Void is effortless to write and requires real effort to migrate — though "real effort" is closer to "a week" than "impossible."

The right answer isn't maximum implicitness. It's ergonomic explicitness — keep the spec in code you own, make it simple enough that nobody reaches for the escape hatch.

I'll tell you what we actually run. It's three tiers, each doing what it's good at:

Pulumi for the foundation. A central infrastructure repo with thirty-one reusable TypeScript modules — DNS zones, data warehouse setup, Cloud SQL, logging and monitoring middleware, Cloudflare zone configs. The stuff that doesn't change often and isn't tied to any single deploy cycle. It's genuinely good for this. You define it once, version it, and forget about it until you need to add a new zone or resize a database.
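
A representative module from that tier looks something like the sketch below, using @pulumi/gcp with simplified names and illustrative settings:

```ts
// Sketch of a foundation-tier module (names simplified, settings illustrative):
// defined once in the central repo, versioned, and left alone until something needs resizing.
import * as gcp from "@pulumi/gcp";

export interface PostgresArgs {
  region: string;
  tier: string; // e.g. "db-custom-2-7680"
}

export function createPostgres(name: string, args: PostgresArgs) {
  const instance = new gcp.sql.DatabaseInstance(name, {
    databaseVersion: "POSTGRES_15",
    region: args.region,
    settings: {
      tier: args.tier,
      backupConfiguration: { enabled: true },
    },
    deletionProtection: true, // foundation infra should not vanish on a bad `pulumi up`
  });

  return { instance, connectionName: instance.connectionName };
}
```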

Pulumi colocated for service-level infra. Some of our app repos have their own Pulumi definitions for things tightly coupled to the service — uptime checks, synthetic monitoring, alerting policies. These live next to the code they monitor because they should change when the service changes.
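
The colocated tier is smaller and moves with the service. Something like this sketch, where the host, values, and exact field requirements are illustrative rather than copied from our repos:

```ts
// Sketch of service-colocated monitoring (host and values illustrative; field
// requirements vary by provider version). It lives in the app repo because it
// should change when the service's endpoints change.
import * as gcp from "@pulumi/gcp";

export const ordersHealthCheck = new gcp.monitoring.UptimeCheckConfig("orders-api-health", {
  displayName: "orders-api /healthz",
  timeout: "10s",
  period: "60s",
  httpCheck: { path: "/healthz", port: 443, useSsl: true },
  monitoredResource: {
    type: "uptime_url",
    labels: { host: "orders.example.com" },
  },
});
```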

GitHub Actions + Wrangler for deploys. Thirty-three reusable workflow templates across the org. The actual ship-it cycle: build, test, deploy via wrangler. This is where we're consolidating — moving toward GH Actions as the orchestration layer and wrangler.jsonc as the deploy config.

We also still have Kubernetes and ArgoCD for some of our internal APIs and services. And a Jenkins server that nobody wants to talk about. We're moving away from both, but they're still running — because that's what a twelve-year-old stack looks like.

It's not elegant. But it's ours — every module is TypeScript we wrote, every workflow is a file we can read, and when something breaks at 3am I can trace the problem from the application through the pipeline through the infrastructure to the cloud resource. An agent can read any of it because it's all in repos we control.

Is it more work than void deploy? Obviously. But I know where the bodies are buried. And when a platform changes its pricing, gets acquired, or deprecates an API — I'm not waiting for someone else's eject button.

# What Void Actually Showed Us

Void's real contribution isn't the product. It's the proof that the infrastructure-as-separate-codebase model is ending.

When the creator of Vue and Vite — someone who understands developer tooling at a level most of us can only squint at — looks at the landscape and says "the spec should live in the code," that's a directional signal. Not because Evan You is always right. Because the economics force the same conclusion from every angle:

The teams that win the next three years aren't the ones who picked the right tool. They're the ones whose infrastructure is most legible to their agents. Context-driven infrastructure isn't a developer ergonomics story. It's a leverage story. The same leverage story that's playing out in every layer of the stack right now.

# The Pattern

This is the same collapse that happened to build tooling a decade ago. Remember when you needed separate tools for compilation, bundling, minification, source maps, and hot reload? Then webpack absorbed them. Then Vite absorbed webpack's job and did it faster. Now Vite+ absorbs linting, formatting, and testing into a single binary. Each generation, the spec moves closer to the code and the boundary between "your application" and "the infrastructure it runs on" gets thinner.

Void is Evan You betting that deployment is the next thing to collapse into the application layer.

He's probably right about the direction. The question is whether you want to be on the platform that does the collapsing, or whether you'd rather steal the pattern and keep the keys.

I know which one I'd pick. But I've been burned by magic before. Twelve years of burned. The thing about scrap wood is you learn which glue holds and which glue looks like it holds.

Tagged

  • architecture
  • ai
  • systems
  • tools
On the trail: Engineering · Agentic Engineering