
One of the things people often bring up when talking about AI and coding is "hallucinations." You'll hear it framed as the AI confidently doing the wrong thing, which supposedly makes it too unsafe or unreliable to be part of a development team.
I code with AI a lot, and honestly, this does happen. Sometimes the output is clearly not what I asked for, and I’ll tell it to stop and start again.
But here’s the thing. I don’t get my own code right on the first try either.
When I’m solving a problem, I usually explore a few different approaches. I might write something, realize halfway through that my understanding of the problem has shifted, and then rewrite parts of it or throw it away entirely. That’s not a failure. That’s just how thinking works while building software.
So when AI produces code that misses the mark, I don’t see it as hallucinating in a human sense. I see it as the AI forming a reasonable interpretation of my prompt based on the information I gave it. Sometimes that interpretation aligns perfectly with what I want. Sometimes it doesn’t. And sometimes it exposes that my request wasn’t as clear as I thought it was.
In those moments, the “wrong” answer is still useful. It forces me to be more precise. It highlights missing constraints. It pushes me to explain the problem better, which often leads to a better solution overall.
If we treat AI like a junior developer who needs context, feedback, and refinement, the experience makes a lot more sense. The iteration loop is just faster, and the cost of trying another approach is close to zero.
In that light, AI hallucinations feel less like a fundamental problem and more like a natural part of collaboration. They’re a reminder that software development has always been iterative, exploratory, and conversational — and that hasn’t really changed just because the collaborator happens to be a machine.