Much of the ongoing discourse surrounding AI can be divided along two lines of thought. One concerns practical matters: How will large language models (LLMs) affect the job market? How do we stop bad actors from using LLMs to generate misinformation? How do we mitigate risks related to surveillance, cybersecurity, privacy, copyright, and the environment?
The other is far more theoretical: Are technological constructs capable of feelings or experiences? Will machine learning usher in the singularity, the hypothetical point where progress will accelerate at unimaginable speed? Can AI be considered intelligent in the same way people are?
The answers to many of these questions may hinge on that last one, and if you ask Blaise Agüera y Arcas, he replies with a resounding yes.
Agüera y Arcas is the CTO of Technology & Society at Google and founder of the company’s interdisciplinary Paradigms of Intelligence team, which researches the “fundamental building blocks” of sentience. His new book — fittingly titled What Is Intelligence? — makes the bold but thought-provoking claim that LLMs such as Gemini, Claude, and ChatGPT don’t simply resemble human brains; they operate in ways that are functionally indistinguishable from them. Operating on the premise that intelligence is, in essence, prediction-based computation, he contends that AI is not a disruption or aberration, but a continuation of an evolutionary process that stretches from the first single-celled life forms to 21st-century humans.
Big Think recently spoke with Agüera y Arcas about the challenges of writing critically about AI for a general audience, how attitudes in Silicon Valley changed over the course of his career, and why the old approach to machine learning was bound for a dystopian future.
Big Think: Science writing often relies on metaphors, but metaphors can be a double-edged sword. By explaining the unfamiliar through the familiar, writers can sometimes overlook meaningful differences. What’s your take?
Agüera y Arcas: I try to minimize them partly because of the issue that you’re alluding to. They can lead to wrong assumptions.
When I say things like “the brain is a computer” or “life is computational,” some people interpret that as metaphorical — the same way we used to talk about the brain as an engine or as a telephone switching station. I don’t mean it metaphorically. I mean it literally.
Big Think: In your introduction, you refer to two groups of readers: the veteran researcher with little patience for pop-sci, and the casual passerby with little specialized knowledge. How do you keep your writing engaging and accessible to both?
Agüera y Arcas: This was by far the biggest challenge I faced writing this book. I tried to bring in only what was needed. For example, I needed to explain some things about thermodynamics, and I thought really hard about how to do that in a way that wasn’t superficial but also not boring to somebody who already knows about it. In many cases, I tried adding a twist that would give experts a new perspective on a familiar problem.
Big Think: AI means different things to different professions. As a writer and researcher, does your personal attitude toward AI — your expectations, hopes, concerns — change depending on which hat you’re wearing?
Agüera y Arcas: My perspective about many things, AI included, changes depending on how far I zoom in. When I look at what’s happening day by day, it’s discouraging. It’s easy to get worked up about the things you see in the news, some of which are grim.
But when you zoom out and take a historical perspective on what people’s lives were like in 1900, for example, it’s hard not to see extraordinary positive trends — even if there have been many bumps along the way. I try to spend a healthy amount of time zoomed out, not only because it’s a more cheerful place to be, but also because history is accelerating. These days, zoomed out is not even that zoomed out anymore.
Big Think: Your career has spanned several cycles of AI optimism, stagnation, and breakthroughs. What kind of discoveries or personal experiences led you to arrive at the “against the grain” premise of your book?
Agüera y Arcas: I would like to say that I’ve been wise all along and, in my wisdom, steered the middle course while everybody else oscillated [between] extreme AI optimism and extreme AI pessimism. In reality, of course, my own mind has changed quite a lot over the years.
There were big thinkers involved in the early days of the internet and personal computing who truly believed these technologies would be liberating and inherently democratic. They grew very disenchanted when it turned out that countries can make giant firewalls and use the internet for surveillance or to spread mass disinformation.
It’s a bit like the timescales question. When you’re caught up in something and only see the potential, it’s easy to become hyperbolic about it. When the two-sided nature of a technology becomes obvious, you swing all the way the other way because you suddenly see that this is not the simple story you thought it was. But none of these stories is simple. They’re all complex.
Is it true that the internet, personal computers, and smartphones are not liberatory? No. They certainly have been for many people in the world in many circumstances.
Big Think: In the book, you mention David Graeber, the renowned anthropologist of capitalism and business culture, describing the disillusionment of the late 20th-century AI scene as a “secret shame,” a “broken promise” of technological advancements that didn’t materialize.
How do you recall that time, so different from the one we live in today, when progress appears to be speeding up once more?
Agüera y Arcas: I love David Graeber and miss his voice — he died a few years ago, way too young. I didn’t agree with all his takes, but I thought he was a fresh, innovative thinker.
That quote is from Utopia of Rules, [which] was published in 2015. This was ironic timing because, at that point, the AI revolution — or at least the neural net part of it — was well underway. That was right in the middle of what Jeff Dean called the “golden age of deep learning.”
What Graeber was writing about overlaps quite a lot with what economists have written about: the big slowdown in technological acceleration after 1970. When The Lord of the Rings author J.R.R. Tolkien was born, cavalry charges were still a thing, and by the time he died, we had the hydrogen bomb. That kind of upheaval is unparalleled, [but] the generation after Tolkien did not experience the same level of technological transformation. There was indeed a real slowdown after 1970.
We’ve entered another speeding-up period that began in 2020. I think AI is a huge deal — not only as a technology in its own right but also as a meta-technology that accelerates the development of other technologies. [It’s analogous to] electricity between 1870 and 1970. As I said, it’s ironic that Graeber was writing at what, in retrospect, may end up looking like the end of that slow period.
Big Think: Speaking of that period, you write that people who worked on rudimentary AI in the early 2010s didn’t really believe they were working on AI at all. Why not? Is it because the significance of their work became evident only in retrospect?
Agüera y Arcas: When The Utopia of Rules was published, tasks like visual category recognition — having AI recognize a picture of a banana as a banana — were already working reliably. Handwriting, speech recognition, and similar problems were also advancing. In 2016, an AI even beat one of the world’s best players at Go, a game that had resisted classical computer science methods for many years.
All of this progress was being made with neural nets, brain-inspired architectures quite different from the earlier symbolic approach known as Good Old-Fashioned AI (GOFAI). This was the source of our optimism: real progress toward real AI using brain-like approaches.
The idea that general intelligence — the ability to use language in general ways, to understand concepts, to reason — would just emerge from neural nets trained on narrow, specific problems seemed implausible. Those systems had just a single goal: score 100% on a particular test. Like many others, I thought we’d need much more fundamental insight into what general intelligence is before we could get to “real AI.”
The surprise was that when we applied AI in an unsupervised setting — not just training it for a specific task — we approached what appears to be general intelligence. That was a big shock.
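To make that contrast concrete, here is a minimal sketch (my illustration, not anything from the book or the interview) of the two training objectives being distinguished: a supervised loss that scores a model on one narrow task, and a self-supervised next-token loss over open-ended human output. It assumes PyTorch, and the two models are stand-ins for any networks producing logits with the shapes noted in the comments.

```python
# Illustrative only: a narrow supervised objective vs. the open-ended
# next-token objective behind modern LLMs. `classifier` and `language_model`
# are assumed to be callables returning logits with the shapes noted below.
import torch
import torch.nn.functional as F

def supervised_loss(classifier, images, labels):
    # Task-specific training: the only goal is a high score on one benchmark
    # (e.g., labeling a picture of a banana as a banana).
    logits = classifier(images)                    # (batch, num_classes)
    return F.cross_entropy(logits, labels)

def next_token_loss(language_model, token_ids):
    # Open-ended modeling of human output: predict each next token,
    # with no single task or benchmark being maximized.
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = language_model(inputs)                # (batch, seq_len, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```

The difference looks small in code, but it is the shift Agüera y Arcas is pointing to: the second objective is predictive over everything people write rather than a score on any particular test.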
Big Think: For those who are looking in from the outside, the question isn’t always “how” or “why,” but “so what.” In this case, what’s at stake in arguing, as you do, that intelligence equals prediction equals AI?
Agüera y Arcas: I think one of the biggest “so whats” relates to the fact that the old-fashioned way of thinking about artificial intelligence — as optimizing something, maximizing some test score — turned out not to be correct. And that was really good news.
When we were doing GOFAI, we were maximizing a test score: getting the transcription right, getting image category recognition right. If people think of artificial intelligence as optimizing a score, that’s very utilitarian thinking. It’s like assuming people, companies, or other entities are all about maximizing money or happiness.
The problem is that almost anything you tell an intelligent system to optimize will eventually go in the wrong direction. This is the moral of the Swedish philosopher Nick Bostrom’s paperclip maximizer. You give an innocuous goal, like “make paperclips,” but if the system maximizes paperclips, everything else goes to hell. That’s true almost no matter what you ask for. Anything you optimize or maximize without regard for how it’s done will result in a horrible dystopia. That was the basis on which Bostrom wrote Superintelligence (2014), a very scary book about how superintelligence could mean the death of people, the planet, even the universe.
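As a toy illustration of that point (my example, not Bostrom’s or the book’s), imagine an optimizer that is scored only on paperclip count. Every quantity and conversion rate below is made up; the point is simply that a single-number objective is blind to everything it does not measure.

```python
# Toy unconstrained maximizer: the objective sees only paperclips, so the
# "best" plan is always to convert every remaining resource into them.
world = {"paperclips": 0, "forests": 100, "factories": 100, "people": 100}

def paperclip_score(state):
    return state["paperclips"]          # the only thing the objective measures

def convert(state, resource, clips_per_unit=10):
    # Spend an entire resource to make paperclips.
    new = dict(state)
    new["paperclips"] += new[resource] * clips_per_unit
    new[resource] = 0
    return new

# A greedy maximizer repeatedly picks whichever conversion raises the score most.
state = world
for _ in range(3):
    candidates = [convert(state, r) for r in ("forests", "factories", "people")]
    state = max(candidates, key=paperclip_score)

print(state)  # {'paperclips': 3000, 'forests': 0, 'factories': 0, 'people': 0}
```

Nothing in the objective penalizes the side effects, so the “optimal” outcome is exactly the dystopia Bostrom describes.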
In many ways, the theme of What Is Intelligence? is that intelligence is not the same as value maximization. We actually achieved general intelligence when we stopped doing supervised learning and instead did open-ended modeling of human output. That’s why I feel more optimistic about AI. I see it as part of an existing ecosystem, part of human intelligence, and not some alien monster applying bizarre inhuman thinking to optimize a problem we give it. I don’t think that’s how intelligence works.
