
Defaults Are Decisions

By Bri Stanback · 10 min read

This comes from building SchoolScope, a school data product where my own scoring model reproduced the demographic patterns I was trying to help parents see past — and from twelve years watching a codebase's early decisions become invisible.

I built a school data product for parents. The idea was simple: take publicly available test score data and make it useful — help families understand what a school's numbers actually mean, not just whether they're high or low.

The data broke along demographic lines almost immediately.

Not in a way that surprised me — I knew the research. But there's a difference between knowing that test scores correlate with income and race, and seeing it in your own product, in the thing you built, where the default sort order puts certain schools at the top and others at the bottom. I hadn't decided to rank anything by demographics. I'd just used the available data, applied a reasonable scoring model, and the defaults did the rest.
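
Here's the shape of it, stripped down to a sketch. The schools, fields, and weights below are invented for illustration, not the actual SchoolScope data or model:

```python
# A minimal sketch of the default that bit me. Schools, fields, and weights
# are invented for illustration -- not SchoolScope's actual data or model.
schools = [
    {"name": "Oakview",   "proficiency": 0.91, "growth": 0.48, "pct_low_income": 0.12},
    {"name": "Riverside", "proficiency": 0.62, "growth": 0.71, "pct_low_income": 0.78},
    {"name": "Hillcrest", "proficiency": 0.55, "growth": 0.66, "pct_low_income": 0.83},
]

def score(school):
    # A "reasonable" scoring model: weight raw proficiency heavily.
    # Raw proficiency tracks income, so the score tracks income too.
    return 0.8 * school["proficiency"] + 0.2 * school["growth"]

# The default: sort descending by score. Nobody decided to rank by
# demographics -- the ranking just inherits whatever the inputs carry.
for school in sorted(schools, key=score, reverse=True):
    print(school["name"], round(score(school), 2), school["pct_low_income"])
```

Nothing in that snippet mentions demographics. It doesn't have to: the inputs carry them in, and the default sort passes them straight through.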

I sat with that for a while. Not the policy question — the feeling. The specific discomfort of building something that does exactly what you told it to, and realizing what you told it carries more than you thought. I wasn't being careless. I was being default. And the defaults weren't mine — they were the system's, inherited through the data, and I'd passed them through without examining them.

That's the thing about defaults. They feel neutral. They're not.


I've always believed wealth flows toward preexisting capital. That's not a political statement — it's compound interest. Capital accrues to capital. The AI version is the same physics: context accrues to context. If you already have clean data, strong documentation, and a solid engineering culture, AI makes you better faster — which generates more data, which improves your context, which accelerates the flywheel. If you don't have those things, AI doesn't help you catch up. It widens the gap at the rate the flywheel spins.

This plays out at every level. The individual who already knows how to prompt gets more from AI, learns faster, becomes more valuable. The one who doesn't falls behind at the rate the other one accelerates. The company with clean architecture deploys agents effectively. The one with messy foundations can't even start. The country with digital infrastructure benefits. The one without it doesn't just miss out — the gap compounds.

Compounding systems amplify their starting conditions. That's true of capital markets. It's true of codebases. And it's true of who AI serves.
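
If that sounds abstract, the arithmetic isn't. A toy calculation, with made-up numbers; only the shape matters:

```python
# Toy numbers, nothing more: two teams get the same multiplicative boost
# from AI every cycle; only their starting context quality differs.
strong, weak, rate = 1.00, 0.50, 1.3

for cycle in range(6):
    print(f"cycle {cycle}: gap = {strong - weak:.2f}")
    strong *= rate   # context accrues to context
    weak *= rate     # same rate, smaller base -- the gap still widens
```

Same rate for both. The gap widens anyway, because compounding multiplies whatever you started with.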


# The Conversation We're Having (and the One We're Not)

When people talk about inclusivity and AI, the conversation usually falls into one of three lanes.

Bias in training data — facial recognition with order-of-magnitude error rate differences across skin tones, language models encoding stereotypes. Measurable, auditable, where the most obvious harm lives.

DEI governance — responsible AI frameworks, audit checklists, review boards. Necessary infrastructure, but it frames inclusivity as a compliance layer rather than a design question.

Inclusive design — Kat Holmes' Mismatch (2018) is the closest to what I'm trying to say: exclusion isn't a failure of the system, it's a feature nobody chose consciously. But Holmes was talking about doorknobs and interfaces. A poorly designed doorknob excludes one person at a time. A system prompt excludes at scale.

What I'm not seeing is the framing I keep circling: inclusivity as architectural debt. Not bias as a bug. Not diversity as a checkbox. But defaults as structure — reasonable decisions that compound into a system that serves some people well and others not at all, the same way technical debt compounds into a codebase that works but nobody can change.

That's the conversation I want to have.


This isn't abstract to me. When I build a context engineering stack, I'm deciding whose knowledge gets surfaced. When my team writes system prompts, we're deciding what "helpful" sounds like, what "professional" means, what kind of questions are worth answering and which get deflected. When we choose training data or select which documents go into the RAG pipeline, we're deciding whose experience counts as ground truth.
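
Here's what one of those decisions looks like where it actually lives. The fields and filters below are hypothetical, a sketch rather than any particular framework's API:

```python
# Hypothetical ingestion filter for a RAG pipeline -- field names invented.
# Every condition reads like housekeeping; every condition is also a decision
# about whose experience counts as ground truth.
all_docs = [
    {"language": "en", "updated_year": 2024, "source": "eng-docs", "title": "Deploy guide"},
    {"language": "es", "updated_year": 2024, "source": "support",  "title": "Guía de soporte"},
    {"language": "en", "updated_year": 2019, "source": "wiki",     "title": "Why we built it this way"},
]

def should_index(doc):
    if doc["language"] != "en":                     # non-English knowledge never surfaces
        return False
    if doc["updated_year"] < 2022:                  # institutional memory drops out
        return False
    if doc["source"] not in ("wiki", "eng-docs"):   # support tickets, field notes excluded
        return False
    return True

corpus = [d for d in all_docs if should_index(d)]   # the "ground truth" the agent will see
print([d["title"] for d in corpus])
```

Three if-statements, each defensible in a code review. Together they decide whose knowledge the agent can ever retrieve.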

None of these feel like inclusivity decisions. They feel like engineering decisions. That's what makes them powerful.


# The Architecture of Who Gets Served

I've spent twelve years watching a codebase grow. One of the things you learn over that much time is that early decisions become invisible. The database schema from 2014 still shapes what's easy and what's hard to build. The API conventions from 2016 still determine how new engineers think about the product. Nobody chose these as permanent constraints. They were reasonable defaults that calcified.

Inclusivity works the same way in systems. The early decisions — whose use cases get prioritized, whose language patterns train the model, whose workflows define "normal" — become the bedrock. Not because anyone decided to exclude. Because someone decided to include specific things, and everything else became an edge case.

AI amplifies this. A human team making exclusionary defaults affects the people who use their product. An AI system trained on those defaults reproduces them at scale, across every interaction, without the human friction that might cause someone to pause and say "wait, this doesn't feel right."

I keep coming back to something from my design physics essay: constraints shape what's possible. In Q2 — humans designing for agents — the constraints you set determine what the agent can do. But they also determine who the agent serves well. A tool schema that assumes English. A system prompt that defines helpfulness through a Western professional lens. A context window that prioritizes recent documentation over historical context. Each is a constraint. Each is a decision about who belongs inside the system's assumptions.
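
To be concrete about what a constraint means here, this schema and prompt are invented for illustration, not any vendor's actual format:

```python
# Invented tool schema and system prompt -- illustrative only.
# None of these defaults announce themselves as decisions about who gets served.
search_tool = {
    "name": "search_knowledge_base",
    "parameters": {
        "query":        {"type": "string"},
        "language":     {"type": "string",  "default": "en"},  # the tool assumes English
        "max_age_days": {"type": "integer", "default": 90},    # recency over history
    },
}

system_prompt = (
    "You are a helpful assistant. Be concise, professional, and direct."
    # "professional" and "direct": a specific register, standing in for "helpful"
)
```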

And like all architectural decisions, inclusivity debt compounds when nobody's paying attention. A team launches a product with reasonable assumptions about their users. Each feature optimizes for that user base. Each A/B test reinforces that demographic's behavior patterns. Over time, the product becomes deeply, invisibly optimized for a specific kind of person — not because anyone decided to exclude, but because every small decision was reasonable and data-driven and for the users they had. The users they didn't have never generated the data that would have changed the defaults.


# Where I've Felt This

Early in my career, I'd join a codebase and immediately absorb its conventions — naming patterns, architectural opinions, the unspoken rules about where things go. I was good at it. People thought I ramped up fast. What I was actually doing was erasing my own instincts and replacing them with whatever the system already believed.

My default mode is disappearance — mold yourself to fit what's already there, read the defaults before anyone tells you they exist. In code, that meant I didn't push back on an API design I found confusing, because I assumed the confusion was mine. Didn't suggest a different data model, because the existing one had momentum and who was I to question it.

Years later, as an architect, I started to see those same unquestioned defaults from the other side. The API I'd found confusing was confusing — but it had been designed by people who shared enough context that the confusion was invisible to them. The data model I hadn't questioned encoded assumptions about our users that hadn't been true for years. My instinct to adapt had been useful for me. It had been terrible for the system. The system needed someone to say this doesn't work for people like me, and instead I'd said I'll figure it out.

That's the personal version of what happens at scale. When a team builds a product, they build it for the people in the room. The defaults reflect their assumptions, their contexts, their definition of normal. When someone outside that circle uses the product, they adapt — learn the system's language instead of the system learning theirs. And the system never gets the feedback that would change it, because the people who don't fit have already learned to stop saying so.

AI makes this more acute, not less. A human on the other end might pick up on context clues and adjust. An AI agent applies the same defaults to everyone, every time, at scale. That's consistency. It's also erasure, if the defaults weren't built for you.


# The Hiring Loop

Nowhere does this compound faster than hiring.

A job description is a system prompt for humans. It encodes what the team already looks like and calls it "requirements." "Ten years of experience" — that filters out career changers, parents who took time off, anyone whose path wasn't linear. "Culture fit" — that's literally encoding existing culture as a hiring gate, which means you select for people who match the defaults you already have.

I've watched this loop from the inside for twelve years. Early on, I sat in interviews where "strong communicator" meant someone who sounded like the engineers already on the team. I've seen referral pipelines that worked great — if you already knew people who looked like the people we'd already hired. It's not malicious. It's not even conscious most of the time. You hire people who remind you of people who've succeeded before. You calibrate "good interview" against the communication style of your existing team. You promote the signals that correlated with success in your system, not in the world. Each decision is reasonable. The compound effect is a team that looks increasingly like itself.

Now add AI to the loop.

AI résumé screening tools are trained on historical hiring data — which means they learn who you've already hired and pattern-match for more of the same. Amazon famously scrapped an AI recruiting tool because it penalized résumés containing the word "women's" — it had learned from a decade of predominantly male hires. The tool was working correctly. The defaults were the problem.

AI interview analysis has defaults about what "confident" and "articulate" sound like. Whose communication style is the baseline? AI-generated job descriptions reproduce the language patterns of existing postings — and research consistently shows those patterns are gendered: "ninja," "rockstar," and "aggressive" skew the applicant pool male; "collaborative" and "supportive" skew it female.

The hiring pipeline is the purest example of defaults-as-architecture because it's self-reinforcing. Homogeneous team → defaults reflect that team → hiring filters for people who match → more homogeneous team → defaults calcify further. Each cycle is data-driven. Each cycle is reasonable. The exclusion isn't a decision anyone made. It's the decision nobody examined.
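
You can watch the loop run as a toy simulation. Everything below is invented, with a single "style" score standing in for whatever an interview rubric actually measures:

```python
import random

# Toy simulation of the hiring loop above; all numbers are invented.
# The screen prefers candidates who resemble the current team's average.
random.seed(0)
team = [random.gauss(0.7, 0.05) for _ in range(10)]      # homogeneous starting team

for cycle in range(5):
    baseline = sum(team) / len(team)                      # "calibrated against the team"
    pool = [random.gauss(0.5, 0.2) for _ in range(50)]    # a much broader candidate pool
    hires = sorted(pool, key=lambda c: abs(c - baseline))[:5]  # keep the closest matches
    team += hires
    print(f"cycle {cycle}: pool mean {sum(pool)/len(pool):.2f}, "
          f"team mean {sum(team)/len(team):.2f}")
```

The pool never changes. The team never moves toward it. And no single cycle looks like exclusion; each one just hired the "best fit."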


# Who Operates, Who Gets Operated On

There's one more layer I keep thinking about: who's using AI, and who's having AI used on them.

Right now, the people operating AI agents — prompting, building, steering — are overwhelmingly technical, English-speaking, already-privileged knowledge workers. Prompting skill correlates strongly with existing education, language fluency, and access. If AI is the new literacy, the most capable versions of it are still largely behind paywalls. A class marker is forming. "I shipped this with Claude Code" is becoming what "I can code" was in 2010.

Meanwhile, AI augments the people who already have leverage — architects, senior engineers, people with context and judgment — and displaces the people doing execution work. The displacement concentrates in roles that are often more diverse: customer support, data entry, junior positions that were historically the on-ramp.

So there's a split forming: some people use AI, and other people have AI used on them. Hiring screens. Credit decisions. Content moderation. Insurance assessments. The operators set the defaults. The operated-on live inside them.

That split maps onto existing power lines with uncomfortable precision. And it means the defaults-as-decisions problem isn't just about building better systems. It's about who gets to build them at all.


# What Architecture Can Do

The places that seem to get this less wrong share a pattern: they treat the absence of feedback as a signal, not as silence.

The deepest problem with default-driven exclusion is that the people being excluded don't generate the data that would change the defaults. They leave. They find a workaround. They adapt. The system never learns. So the question I've started asking in my own work isn't "is this biased?" — it's who can the system not hear right now? Not satisfaction surveys — those measure the satisfied. Friction logs. Accessibility reports. The edges talking back to the center.
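
In practice that can be as unglamorous as counting who left without saying anything. A sketch, with invented event names and segments:

```python
from collections import Counter

# Treat silence as a signal. Satisfaction scores only cover the people who
# stayed long enough to answer; this counts the people who hit friction and left.
events = [
    {"user": "u1", "locale": "en-US", "action": "completed"},
    {"user": "u2", "locale": "es-MX", "action": "retried_form"},
    {"user": "u2", "locale": "es-MX", "action": "abandoned"},
    {"user": "u3", "locale": "es-MX", "action": "abandoned"},
    {"user": "u4", "locale": "en-US", "action": "completed"},
]

# Who can the system not hear? Count abandonment by segment instead of
# averaging the satisfaction of whoever made it to the survey.
silent = Counter(e["locale"] for e in events if e["action"] == "abandoned")
print(silent.most_common())   # the segments the defaults are quietly losing
```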

The same instinct applies to teams. Different lived experiences produce different default assumptions. A team that includes people who've been on the wrong side of a system's defaults will notice the defaults that a homogeneous team treats as neutral. That noticing is an engineering skill, not a diversity checkbox. The system prompt review, the context pipeline audit, the question of whose definition of "helpful" you shipped — these are design review questions with the same stakes as any architectural decision.

Inclusivity in AI isn't a separate concern from system design — it is system design. The same skills that make a good architect — examining assumptions, questioning defaults, thinking about who the system serves and who it doesn't — are the skills that make an inclusive one.

The difference between engineering and exclusion is whether you examined the default before you shipped it. When we call something "engineering" we scrutinize it, we review it, we test it. When we call something a "default" we let it pass.

Inclusivity debt is technical debt for who gets served. It compounds the same way. It resists cleanup the same way. And like all architectural debt, the most expensive kind is the kind you don't know you're carrying.


I think about the school data product sometimes. The sort order that put certain schools at the top. I changed it — eventually. But the thing I remember most is how long it took me to see it. Not because I wasn't looking. Because the default felt like the data, and the data felt like the truth, and I was the one who'd chosen what "truth" looked like.

Context accrues to context. The defaults will compound either way. The only question is whether you examined them before they became the architecture — or whether you're still calling them neutral because nobody's told you otherwise.

Tagged

  • ai
  • systems
  • identity
  • architecture