There's a scene in the movie Short Circuit where everyone around the robot starts treating it like it's a person. "Number Five is alive!" And the whole movie hinges on that tension between what something appears to be and what it actually is.
I think about that a lot these days.
My workflow with Claude is voice-driven. I speak, Wispr Flow transcribes, and Claude responds in text. I could have it read the responses back to me, but I don't. I choose to read them. That's a deliberate decision rooted in something I know about myself: when people talk to me, part of my brain is already preparing my response instead of actually listening. I've always been that way. But when I read, I consume every single word. I had trouble learning to read as a kid, and what came out of that is a slow, deliberate reading style. I watch my wife tear through a novel and think there's no way she's actually reading that. I can't do that. Every word gets processed.
Turns out, that's a superpower when you're working with an AI. I catch the hallucinations. I catch the drift. I notice when the tone shifts or when something doesn't quite track. Because I'm not skimming. I'm reading every word like it costs something.
The Illusion
Here's the thing about this setup. I talk. It writes. I read carefully. We go back and forth. After a while, it stops feeling like a tool and starts feeling like a conversation with a colleague. A really fast, really knowledgeable colleague who never gets tired and never needs a coffee break.
And that's when it gets you.
I was on a plane, trying to add MCP (Model Context Protocol) settings to a Claude Code instance. Every time I changed the configuration, Claude Code needed a restart. Fine. But every time it restarted, the context was gone. Brand new session. I'd say, "Hey, can you connect to the MCP?" and it would come back with, "I don't know what you're talking about."
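For anyone who hasn't wired one of these up: registering an MCP server in Claude Code means editing a JSON config file or running a `claude mcp add` command, and the change only takes effect after a restart. Something roughly like this, where the server name and package are placeholders rather than what I was actually connecting:

```bash
# Hypothetical example. "docs-search" and the package name are
# placeholders; substitute whatever MCP server you're adding.
claude mcp add docs-search -- npx -y @example/docs-mcp-server

# The new server isn't available until Claude Code restarts.
# The restart is also what wipes the session context.
```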
The first time, I just re-explained. The second time, I was annoyed. By the third restart, I was fully frustrated. And I know how petulant this sounds, but I actually typed a curse word into the prompt. Just straight up typed it in. And Claude came back with, "I understand this is frustrating."
Which, somehow, made it worse. Because now the thing that just forgot our entire conversation was responding with emotional intelligence it doesn't actually have.
Passing Notes Between Strangers
After a few rounds of this, I figured out a workaround. Before each restart, I'd tell Claude to write everything to its memory file. The current state, what we were trying to do, all the context. Then after the restart, I'd say one word: "start." And the new instance would read the memory, pick up the thread, and keep going.
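If you're curious what that note looked like, here's the rough shape of it. The wording and structure are illustrative, not a required format; the memory file (CLAUDE.md, in Claude Code's case) is just text the next instance reads when it starts up.

```markdown
<!-- Hypothetical handoff note written to CLAUDE.md before a restart.
     The headings and wording are illustrative, not a prescribed format. -->
## Session handoff

**Goal:** Get this project talking to the docs-search MCP server.

**State:** Server added to the config; restarting so it loads.

**Next step:** Verify the MCP connection, then resume wiring the
search tool into the workflow.

**On "start":** Read this section, confirm the context back to me
in one line, and continue from "Next step."
```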
It worked. But think about what I was actually doing. I was passing a note from one stranger to another stranger who happened to sit down at the same desk. The first one didn't "remember" anything. The second one read a file and acted like it did. The continuity was an illusion I had to engineer myself.
And yet, once it was working? It felt like talking to the same person again. Immediately.
People ask me about brownfield development with AI. They get it when the code is new, when Claude is writing everything from scratch. But what about an existing codebase? I just look at them and say: to Claude, everything is brownfield. Even the code it wrote itself yesterday. Every time a new instance spins up and starts working, it has to read the codebase, re-familiarize itself, rebuild its mental model from scratch. The code it wrote on Friday is just as foreign to it on Monday as code some other developer wrote five years ago. It's always a stranger sitting down at someone else's desk.
The Part Where I Typed a Curse Word
I want to come back to the cursing, because it's actually the most interesting part. I didn't curse at my screen. I didn't mutter something under my breath. I typed it into the prompt as a message to Claude. Like it was a person who could receive that frustration and do something with it.
And you know what? It kind of worked. The tone of the responses shifted after that. Whether that's because the model actually adjusted to the emotional signal in my input or because I'm reading into it, I honestly don't know. But in the moment, it felt like effective communication. Like I'd gotten through to somebody.
That right there is the whole thing. The interaction is so natural that my instincts for human communication kick in, and they produce results. Not because there's a person on the other end, but because the model is responsive to the patterns of human conversation. Including frustration.
Number Five Is Not Alive
None of this changes the fact that all the limitations of large language models still apply. Context windows are finite. Drift happens. Hallucinations happen. The model doesn't remember you from one session to the next unless you've built scaffolding to fake it. It's a prediction engine that's remarkably good at sounding like a person, and that's exactly what makes it so disorienting when it breaks.
If Claude felt like a command-line tool, I'd shrug and restart when something went wrong. But it doesn't feel like a command-line tool. It feels like a co-worker. So when it loses context, it's not a system error. It's a colleague who suddenly doesn't know who I am or what we've been working on. And that triggers a very human frustration that the system isn't equipped to actually resolve.
I'm not confused about what this is. I build with these tools every day. I know it's a language model. I know the context window, I know the drift, I know the limitations. But knowing that doesn't change the experience of using it. The feeling is real, even if the thing on the other end isn't a person.
Number Five is not alive. But I'll probably curse at him again tomorrow.
Kevin Phifer is the founder of Theoretically Impossible Solutions LLC, specializing in agentic AI development and consulting. You can reach him at kevin.phifer@theoreticallyimpossible.org.