AI vs. Human Brain: How Performance Breakdowns Mirror Each Other
A deep look at how AI works, and a surprising look at how AI's struggles reflect our own challenges with high performance
If you use AI every day—whether for writing documents, crunching numbers, coding, or creating visuals—you’ll start to notice something is a little... off. Like us humans, AIs have their “off” days. Days when the responses are less “wow” and more “what just happened?” Sometimes, they stall or make critical mistakes. Sometimes, they generate completely unrelated nonsense. Other times, they misunderstand the question entirely. It’s like talking to someone who’s distracted or just woke up. The good news? It doesn’t happen every day. But once you’ve seen it happen often enough, you start to realize: AI isn’t the perfectly polished machine it’s hyped up to be. It doesn’t respond with the same consistent reliability we’ve come to expect from traditional software or hardware.
Now, AI “off” days aren’t caused by mood swings or sleep deprivation, but strangely enough, they do resemble our bad days. Sometimes it’s due to limited computing resources being stretched too thin—kind of like how we feel when too many things are demanding our attention at once. Other times, it’s because of behind-the-scenes updates, new model versions, or system tweaks. While we expect these upgrades to be improvements, they can change how AI responds to prompts that used to work perfectly—something known as “prompt drift.” It’s a bit like how we change as humans: we learn new things, and our answers change, too.
Here’s another fun parallel—asking the same question repeatedly. For humans, that’s an annoyance tactic (interrogators use it for a reason). It wears us down. Interestingly, AI has a breaking point here, too. At first, it gives somewhat different versions of the same answer. Then, it starts to lose the plot. Eventually, it might start saying completely wrong things or glitch out entirely—what developers politely refer to as “bugs in the system.” [1]
AI systems are built to mimic the structure of the human brain’s neural network. Ironically, while most people have only a vague idea of how their own brain works, the rise of AI has sparked a massive interest in understanding how these digital brains function. So let’s unpack a few basics—and in the spirit of efficiency, let’s hit two “brains” with one stone.
First off, it’s helpful to know that computing is evolving into three broad categories:
Deterministic systems are the traditional kind. They follow clear, rule-based logic—“if this, then that.” You give them specific inputs, and you get predictable outputs. If something unexpected happens, the system throws an error. These are the reliable workhorses we’ve all known and trusted: they run step-by-step algorithms, and the same input always produces the same output.
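Here’s a tiny sketch of that “if this, then that” behavior—a made-up shipping-price function, with thresholds invented purely for illustration:

```python
# Deterministic, rule-based logic: identical input always yields identical output.
def shipping_cost(weight_kg: float) -> float:
    # Hypothetical pricing rules, for illustration only.
    if weight_kg <= 0:
        # Unexpected input? The system throws an error rather than guessing.
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 5:
        return 9.0
    return 15.0

print(shipping_cost(3))  # always 9.0, on every run, on every machine
```

No probabilities, no learning—just rules. That predictability is exactly what makes these systems feel reliable.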
Stochastic systems are what enable AI and large language models (LLMs). These systems don’t follow strict rules; instead, they operate on probability. That’s a big shift, so let’s break it down. Imagine an AI starting out like a human baby. Ask it a question and you’ll get gibberish. Over time, as it’s trained on real human dialogue, it begins to learn which words are likely to follow others and how sentences are structured—much like a baby learning to talk by listening and repeating.
As the AI trains, it learns to predict the most likely next word in a sentence, choosing one based on the highest probabilities. That learning is captured in the model’s “weights”—think of them as the parameters it builds up and stores. Training happens up front; the phase you actually interact with is called inference, where the trained model applies those stored weights to your prompt and generates the response you see.
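To make that concrete, here’s a toy version of the idea: count which word follows which in a tiny made-up corpus (“training”), then pick the highest-probability next word (“inference”). Real LLMs use billions of learned weights rather than raw counts, but the next-word-prediction principle is the same:

```python
from collections import Counter, defaultdict

# Toy "training": count which word tends to follow which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Toy "inference": given a word, predict the most probable next word.
def predict_next(word: str) -> str:
    counts = follows[word]
    total = sum(counts.values())
    # Turn raw counts into probabilities, then pick the highest.
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

print(predict_next("the"))  # "cat" — it follows "the" most often in this corpus
```

The counts here play the role of the model’s weights: they’re built during training and simply applied at inference time.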
When you or I get a question, we mentally weigh possible answers using knowledge we’ve learned over time. That process—taking in a question, considering answers, choosing the best one—is essentially what AI does during inference. The first time we consider something, we need to think. The second time? We just recall our earlier conclusion. AI systems do something similar by caching earlier results, which lets them answer repeated requests faster—as long as there’s enough computing power to back it up.
(A great explainer for how all this works is Andrej Karpathy’s deep-dive video into LLMs like ChatGPT—highly recommended if you're curious to peek inside the AI brain.)
Now, back to our computing types. We’ve covered deterministic and stochastic. The third is quantum computing, but that’s a whole rabbit hole of physics we’ll save for another day.
Here’s what I really want to leave you with: the human brain is so much more than a neural network. We are extraordinary beings with infinite capacity to learn, adapt, and grow—and we’re at our best when we work in teams and communities. AI is here to stay, much like the Internet and nuclear power before it. It’s new, it’s powerful, and yes, it’s confusing. But our brains? They were built for this kind of challenge. Every time we see, hear, or sense something new, our brains … they adapt!
AI tries to mimic one slice of our brain’s function—but the full picture? That mystery runs far deeper. And the more we explore, the more we realize just how much more there is to discover. So stay curious, keep learning, and keep growing—no matter what.