
Raising Humans in an AI World

By Bri Stanback

My daughter is three. She's figuring out spoons, opinions, and the word "why." I'm figuring out what to teach her when half of what I learned is becoming obsolete.

I build AI systems. I spend my days thinking about how machines learn, what they can do, where they fail. Then every morning I drop her off at preschool and watch her clip the sternum strap on her puppy-dog lunchbox backpack by herself. My instinct is to rush — I can feel the line building behind us, the clock running — but I let her. If the strap is twisted, I'll scaffold: straighten it, hand it back. But I don't clip it for her. This is essentially Montessori — the child chooses, the adult steps back.

She also insists on climbing down from the car herself. And lately she's wanted to buckle her own carseat before we leave the house. She struggled with it at first. If I tried to help — even once — she'd have a full meltdown. We'd have to unbuckle everything and start from the beginning. The whole sequence, from the top. Her terms.

I used to find this exasperating. Now I think it might be the most important thing she does all day. The patience to be bad at something while your body figures it out. The insistence that the struggle is hers. RIE calls this "ceremonious slowness" — observing without rushing to fix.

There's a hypocrisy here I should name. I spend my working hours building systems that erase exactly this kind of friction. Specifically: I've spent the last year encoding twelve years of engineering judgment into constraint systems for AI coding agents — the architectural patterns, the testing standards, the domain knowledge that used to take years of scar tissue to accumulate. It's opinionated and specific to what we're trying to accomplish; it's not generalizable, and it will almost certainly change. But that's the point — it's my judgment, crystallized (for better or worse), so that a junior engineer with AI can produce work that reflects patterns I took years to learn. The upside of writing it down is that it's interrogatable — the team can push back, add their own scar tissue, evolve it. It's not sacred. It's a draft of what we think we know.

And then I come home, and I choose the slow thing. I guard her right to fumble, even when it costs us ten minutes we don't have. I am building the thing I'm protecting her from.

But not all friction-removal is the same, and I know that. A kid with dyslexia using text-to-speech isn't losing struggle — they're gaining access to the page. A researcher using AI to synthesize papers isn't skipping the thinking — they're getting to the thinking faster. Some friction is developmental. Some friction is just a barrier. The hard part is that I can't always tell which is which — and neither can the tools I'm building. They remove friction indiscriminately, and the sorting is left to the human on the other end.

And then I think: will she even need to tell the difference?


#The uncomfortable thoughts

I'm not worried about AI taking her job. Not exactly. I'm worried about something subtler — that the process of becoming competent might change shape before she gets there.

I learned engineering by being bad at it. Slowly. For years. I wrote code that broke. I debugged it at 2am. I felt the specific embarrassment of a production incident that was my fault. That scar tissue became judgment. Not because suffering is virtuous — because repetition under consequence is how humans internalize pattern.

AI compresses that. A junior engineer with Claude Code can produce senior-looking output on day one. The code compiles. The tests pass. But the judgment didn't form — the thing that tells you "this works, but it's wrong for our system," the thing that comes from having been wrong enough times that your body knows before your mind does.

I know these are different scales. A three-year-old wrestling with a buckle and a twenty-three-year-old shipping code that passes CI are not the same kind of struggle. The developmental stakes are different, the time horizons are different, the costs of compression are different. But they share a structure: the slow accumulation of failure that becomes feel. And what I keep noticing is that the tools I build don't distinguish between the struggle that builds capacity and the struggle that just wastes time. They compress both.

So what do I actually want for my daughter? Not just skills. Not just "emotional intelligence" — the supplement every AI-era parenting article recommends, as if you can add EQ like a vitamin.

I think I want her to notice she's constructing herself.


#Beyond growth mindset

If you've spent any time around modern parenting advice, you've heard the Dweck gospel: praise effort, not ability. "You worked hard" beats "you're so smart." It's become the baseline — the thing every preschool teacher and pediatrician says now. And it's not wrong. But I'm starting to think it's not enough.

Growth mindset says: you can change. You can get better. That's a belief about capability. It still treats the self as a thing to be improved — like firmware you can update.

The deeper move is self-authorship. Not "can I get better at math?" but "who decided math matters to me, and do I agree?"

With a three-year-old, this looks small.

Instead of just "You worked hard on that," sometimes I try: "I noticed you decided to keep going when it got frustrating. What made you choose that?"

Instead of "You can get better," sometimes: "What kind of person do you want to be when things get tricky?"

She can't answer those questions yet. But I can ask them. And asking them changes me — it shifts my orientation from praising output to noticing agency.

The growth mindset kid believes change is possible. The self-authoring kid is awake to the construction. In a world where AI can do the skills, the second thing might matter more.


#The questions I can't answer

These are the ones I sit with.

How do you build frustration tolerance when AI removes friction? Sophie will wrestle with a puzzle piece for thirty seconds before looking at me. That thirty seconds is everything. It's where neural wiring happens. It's where patience forms. It's where she learns that discomfort doesn't equal danger. But her generation will grow up with tools that erase that space. Instant explanations. Instant rewrites. Instant solutions. If friction disappears, where does patience form?

Am I building on philosophies that assume a world I'm lucky to have? Montessori's emphasis on independence is deeply Western — the self-reliant child as the goal. Many cultures prioritize interdependence, communal learning, the child as part of a fabric rather than a standalone agent. Free-range parenting assumes a neighborhood safe enough to release a child into, which is a privilege, not a baseline. Even RIE's "observe, don't intervene" assumes you have the time and bandwidth to observe — that you're not working two jobs, that there's a second parent, that the margins exist for ceremonious anything. I keep leaning on these frameworks and I keep noticing who they were built for.

What does "showing your work" mean when AI did the work? If she grows up collaborating with AI — thinking with it — is that cheating? Or is that the work? We don't have a stable norm yet. The adults arguing online don't agree. By the time she's in middle school, the rules will have shifted twice.

How does taste develop in a world of infinite generation? When anyone can produce images, music, essays, code — what makes something good? Taste used to require effort. You learned what worked by making things that didn't. You built an internal compass through repetition and failure. If AI flattens the effort curve, does taste still form the same way? Or does it require new kinds of friction — constraint, curation, intentional limits?

When do I let her use AI? Not at three. That's easy. But seven? When she's stuck on homework and the AI can explain it more patiently than I can at 8pm with dishes in the sink? Ten? When she's staring at a blank page and the AI could help her draft the opening paragraph? Where is the line between scaffolding and displacement?

I don't have answers to any of these. Not because I'm nobly sitting with uncertainty — I just genuinely haven't had time to think them through. She's three. I'm still in the carseat buckle phase. The AI questions are real, and they're coming, but right now they're abstract in a way that the question of which cup she drinks from at dinner is not. I'll cross those bridges when I get to them. For now I'm asking them in public because I suspect a lot of parents are carrying them quietly — and they matter more than most of the discourse about prompt engineering or model benchmarks.


#The inheritance problem

There's a parallel that keeps nagging at me. AI abundance feels like inherited wealth.

The research on generational wealth is sobering: 70% of wealthy families lose their wealth by the second generation, 90% by the third. The biggest factor isn't financial literacy — it's purpose. People who never had to struggle for something often can't find meaning. Same pattern with lottery winners. Not because money is bad, but because sudden abundance without structure is destabilizing.

If AI gives everyone "inherited" capability — you can build anything, create anything, produce anything — what separates the people who thrive from those who spiral? Probably the same thing that separates inherited wealth that lasts from inherited wealth that doesn't: purpose, discipline, something you actually care about making.

My daughter will grow up with tools that can do most of what I spent a decade learning. She'll inherit capability I had to earn. The question isn't whether she'll have access to power — she will. The question is whether she'll have a reason to use it that's hers.

That's what the carseat buckle is for. Not the skill. The wanting.


#The developmental question nobody's asking

Erik Erikson mapped human development as a series of tensions: trust vs. mistrust as infants, autonomy vs. shame as toddlers, industry vs. inferiority in school, identity vs. role confusion as teenagers. Each stage assumed a world stable enough that the tension could resolve — you figured out who you were because the roles you were choosing between held still long enough to try on.

I kept trying to name a new stage — integration vs. fragmentation, the ability to collaborate with systems that think differently than you without dissolving into them. But the more I sat with it, the less it needed its own stage. It's not a new tension. It's a new dimension inside the identity stage Erikson already mapped. His version asks "who am I among these roles?" The AI version asks "where do I end and the tool begins?" That's not role confusion — it's a boundary problem he never had to account for.

I feel it in my own work. Some days the line between my thinking and the system's output is clean. Other days it's porous — I can't tell whether an idea was mine or something I steered toward because the model surfaced it. Adults are struggling with this right now, and we had decades of knowing our own minds before the boundary got blurry. She won't have that baseline.

And this isn't just an abstract philosophical problem. It's showing up concretely — in how people relate to their work, their expertise, the years they spent mastering a specific craft. When AI can do the thing you spent a decade learning to do, the boundary question becomes an identity question: what was all that time for? What was ephemeral — the syntax, the APIs, the specific technique — and what was lasting?

Some people aren't struggling with this at all. And it's not just a software thing.

A radiologist who sees themselves as "the person who reads scans" is in trouble — AI reads scans now, faster and often better. But a radiologist who sees themselves as "the person who figures out what's wrong with you, and scans are one of my tools" hasn't lost anything. The tool got better. They got better with it. A graphic designer who is "the person who's great at Photoshop" is watching the ground move. A designer who is "the person who understands why this layout makes you feel something" — that person is fine. They were never the tool. They were the taste behind it.

The people who adapt are systems thinkers first. They see themselves as people who bend reality — who understand how pieces connect, who can look at a problem and feel where the leverage is. The specific craft doesn't matter. The runes change. The witch doesn't. But if your identity is "I'm the person who's good at this particular spell" — good at Python, good at hand-crafted CSS, good at reading chest X-rays, good at whatever the current incantation is — then every transition is a small death. You're not losing a tool. You're losing yourself.

The ones who weather it are the ones who were always the magic-wielder, not the spell. And what makes a good magic-wielder isn't just power — it's synthesis, integration, and judgment. The ability to pull from disparate sources, connect things that don't obviously belong together, and then challenge the result against what you actually believe is true and important. Understanding systems is the meta-skill that survives every tool change, because systems are what remain when the tools don't. But knowing which systems matter — having the taste to choose, the stubbornness to push back, the values to say this is worth doing and that isn't — that's the part AI can't replace. Because it doesn't want anything.

That's what I keep thinking about for my daughter. She won't remember a world without AI collaborators. She'll never have the baseline of pure solo cognition to compare against. So her identity can't be built on "I do this thing the hard way." It has to be built on something AI can't absorb: what she cares about, what she notices, what she chooses to struggle with when she doesn't have to. Not the skill. The orientation toward the skill.

And maybe that's what the carseat buckle is actually teaching her. Not how to clip a buckle — that's the rune, and it'll be irrelevant soon enough. What she's learning is that sequences have logic, that steps depend on other steps, that if you skip one the whole thing fails and you start over. She's building a systems thinker's instinct. The witch, not the spell.

Which is why I keep protecting her friction even as I spend my days eliminating everyone else's. Friction is the forge. The witch is what walks out after all her tools have melted.


#What I'm actually doing

For now, it's less grand than the philosophy.

It's standing in the preschool drop-off line, feeling the parents behind me, and not reaching for the strap. It's watching her unbuckle and rebuckle the carseat for the third time because I touched it and now it doesn't count. It's holding the answer in my mouth when she asks "why" — waiting to see what she'll build first. It's putting my phone down more often than I manage to. It's noticing when I want to speed her up because I'm running on four hours and a reheated coffee — and choosing, occasionally, not to.

And then I go to work and build the thing that makes all of this harder.

I don't have a clean ending for that. I keep wanting one — some formulation where the builder and the parent reconcile, where the tension resolves into wisdom. But it doesn't. I build tools that compress struggle. I come home and protect her right to struggle. I believe in both things at the same time, and I haven't figured out how to hold them without one hand undermining the other.

She's three. She doesn't know I build AI systems. She doesn't know the word "friction." She just knows that the buckle is hers, and if I touch it, we start over.

I'm trying to be the kind of parent who lets her start over. I'm also the person making a world where starting over gets harder to choose.

I don't know how to resolve that. I'm not sure it resolves.
