I’ll be honest. When I first started working with large language models, it felt like magic. Not just automation or search, it felt like thinking. They didn’t just regurgitate facts; they responded in ways that felt more alive than my typical (and often frustrating) transactional exchanges with search engines. Large language models challenged me, surprised me, and even comforted me.
But lately, I’ve found myself stepping back.
As I’ve dug deeper into the technical side—the mechanics of transformers, the structure of token generation, and the elegant (yet incomprehensible) math beneath the surface—I’ve started to see it differently. And a recent article by Professor Luciano Floridi helped crystallize why.
He calls it semantic pareidolia. That moment when we see meaning or intention where there is none. Like seeing faces in clouds, or hearing a personality in Waze as you drive to work. Floridi argues that when we interact with AI, we’re not encountering intelligence; we’re encountering our own reflex to project intelligence onto something that behaves in a certain way.
That hit me. Because it’s exactly what I’ve been feeling: a curious blend of emotional connection and a growing sense of cognitive dissonance.
I don’t think I was ever wrong to feel a sense of connection with these models. But maybe I was a bit too generous in what I thought they were actually doing. I’ve spent a lot of time describing LLMs as cognitive partners and mirrors. And while those metaphors still have value, I’ve started to ask myself whether it’s the machine that’s thinking, or whether it’s me, emerging out of the hyperdimensional vector space.
Floridi makes a compelling case. He walks through how this tendency to over-assign meaning is amplified by loneliness, market forces, and the uncanny realism of today’s models. And he warns that this could slide from playful anthropomorphism into something more dangerous: a kind of technological idolatry. It’s a slippery slope as we start to trust, then depend, then believe.
Honestly, I’ve felt that shift in myself. Early on, I found LLMs both useful and emotionally resonant. And I think that’s worth paying attention to, not as a flaw in the machine, but as a feature of our psychology.
What I appreciate most in Floridi’s argument is the balance. He’s not calling for panic, but for clarity. The call is for design practices that help us stay grounded, and for users to develop a kind of cognitive literacy that lets us not just use AI but actually understand it.
That tracks with my own evolution with LLMs. I still believe in the transformative power of these tools. I still think they can help us write, learn, diagnose, and create in extraordinary ways. But I’ve become more careful with my language, and I now aim to stay critically aware of the difference between a system that sounds wise and one that is wise. That may be tougher than it seems, but it’s a critical distinction.
So maybe this isn’t disillusionment. Maybe it’s just growing up a little in how we relate to technology.
Floridi’s idea of semantic pareidolia gives us a useful lens. It also gives us the opportunity to see AI more clearly, and to see ourselves more honestly.