As kids head back to school, no one knows what they need to learn to survive in a world where AI is designed to outsmart them. (Photo: Julian Stratenschulte/dpa/picture alliance via Getty Images)
“A kid born today will never be smarter than AI – ever,” Sam Altman, CEO of OpenAI, claims on the Huge Conversations podcast with Cleo Abram. Meanwhile, billions of children worldwide are heading back to school. To learn what?
What do we teach children when no one knows what they need to learn? How do we prepare them for a world where AI is designed to outsmart them? Where even computer science graduates trained in developing and dealing with AI struggle to land jobs?
In April 2025, President Donald Trump signed an executive order, “Advancing Artificial Intelligence Education for American Youth,” adding to the widespread assumption that promoting AI literacy and proficiency is the answer to what kids, educators, and anyone else aspiring to a job in the AI era need. But the father of AI saw it differently.
Back To School To Play The Imitation Game
In his 1950 paper, “Computing Machinery And Intelligence”, Alan Turing not only laid the foundation for the development of the AI we know today, he also made clear which jobs AI can handle and which will always call for humans.
Unlike Altman, Trump, and others who think of children, education, and human work in light of AI, Turing thought of AI in light of children, education, and human work. In fact, he literally used terms like ‘child machine’, ‘education process’, and ‘human fallibility’ when describing how to build ‘learning machines’.
And while we now ask what and how to teach children to navigate an AI world, Turing asked what and how to teach a digital computer to navigate a human world. His answer was to teach his child machine to play the so-called ‘imitation game’. The imitation game, he explained, is played with three people:
- Player A, a man whose job is to pretend he is a woman
- Player B, a woman whose job is to tell the truth and help player C uncover it, and
- Player C, an interrogator whose job is to ask questions of players A and B in order to determine who is the man and who is the woman.
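The three-role structure Turing describes can be sketched as a minimal protocol. The function names and the answer logic below are illustrative assumptions for the sake of the sketch, not part of Turing’s 1950 formulation:

```python
# Minimal sketch of the imitation game's three roles.
# The answers and C's judgment rule are hypothetical stand-ins.

def player_a(question: str) -> str:
    # A (the man, or later the machine): imitates, answering as the woman.
    return "I am the woman."

def player_b(question: str) -> str:
    # B (the woman): tells the truth and tries to help C.
    return "I am the woman, and player A is not."

def player_c(ask_a, ask_b, questions) -> str:
    # C (the interrogator): questions both, then names the imitator.
    transcript = [(q, ask_a(q), ask_b(q)) for q in questions]
    # Both players claim to be the woman, so C must judge from the
    # answers alone; here a trivial stand-in rule picks the player
    # whose reply offered less corroborating detail.
    _, answer_a, answer_b = transcript[-1]
    return "A" if len(answer_a) < len(answer_b) else "B"

verdict = player_c(player_a, player_b, ["How does it feel to give birth?"])
```

The point of the sketch is structural: only player A’s seat is ever handed to the machine, while the truth-telling and questioning seats remain human.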
Turing believed that “the question and answer method” built into the imitation game was “suitable for introducing almost any one of the fields of human endeavour” to be included in a learning machine. But unlike the human players, Turing’s child machine was not designed to play all the roles in the imitation game. In fact, AI was – and still is – only designed to play one of the three roles.
Back To School To Understand AI (Player A)
“We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?”
When I first read Turing’s article, I expected this question to be the first of three exploring what would happen if all the players were replaced by machines. But Turing never considered replacing player B and player C. Only A. And the more I think about it, the more brilliant I think it is. By designing AI to only replace player A, Turing didn’t have to worry about whether or not machines can think (the question he opens the article with but later dismisses as “too meaningless to deserve discussion”). All he had to do was design a machine to imitate the player who is already imitating.
Just as a man pretending to be a woman will make things up when answering questions he has no knowledge of – like how it feels to have a period or give birth – Turing’s child machine doesn’t need to know what it’s talking about in order to play the imitation game. It just has to convince its human interrogator that it does.
And that’s probably the first thing Turing would teach kids in the AI age: to understand the difference between knowing what you’re talking about and convincing others that you do – and to see the value in and the need for humans to strive for the former.
Back To School To Strive For The Truth (Player B)
By designing AI only to replace player A, Turing left the roles of player B and player C to us humans. And if he were asked how to prepare children for a world where AI is designed to outsmart them, it’s likely that’s exactly where he would focus: on strengthening their ability to play the role of B and C in the imitation game.
Surrounded by technology that is fundamentally designed to deceive, children must think of themselves as born to B) tell and help each other uncover the truth, and C) ask questions to discern and determine who and what to trust.
Unlike Turing’s child machine, humans are not designed or programmed to play a role in a game. They can play different roles in different games, and they can come up with games themselves (just as Turing did). The very fact that we distinguish between a game and a duty speaks volumes about human ingenuity. And although many have tried, no one has succeeded in putting this ingenuity into a formula.
While the World Economic Forum’s Future of Jobs Survey highlights analytical thinking, creative thinking, and systems thinking as core skills for employees in 2025, and others point to critical thinking, curiosity, and empathy as human skills worth optimizing, Turing focuses on something far more tangible.
He compares the child-brain with a notebook as one buys it from the stationers: “Rather little mechanism, and lots of blank sheets.” And his explanation for why it will not be possible to apply exactly the same teaching process to the machine as to a normal child is that the machine will not be provided with legs, eyes, etc.
While kids have an innate ability to understand and live by the laws of nature, AI must be designed, programmed, and trained to do so. (Photo: Hector Retamal/AFP via Getty Images)
This omission of bodily features is what enables AI to speed up evolution, which is Turing’s stated goal. But it is also what enables us to succeed in the AI age. The fact that we have an innate ability to understand and live by the laws of nature, while AI must be designed, programmed, and trained to do so, gives us an advantage that far exceeds the thinking and social skills highlighted in surveys.
And that’s probably the second thing Turing would teach kids in the AI age: not to listen to people like Altman who claim kids will never be smarter than AI – but to trust their own experience and judgment to guide them as they strive for the truth.
Back To School To Learn How To Ask (Player C)
Teaching kids to play the role of C is tricky. On the one hand, it is harder than teaching them how to understand AI and strive for the truth. On the other hand, it is just different. In the imitation game, player C knows that one of the other players is lying but does not know which. This means that C can trust neither A nor B, and must therefore ask questions both to determine A and B’s gender and to decide who can and cannot be trusted. However, knowing what to ask to determine if you can trust someone cannot be learned by reading a book or following a manual – which is why teaching kids to play the role of C is harder than teaching them anything else.
In an AI world of misinformation, deepfakes, and hallucinations, knowing who to trust is also more important than anything else, and it is therefore worth considering if there is another way. Anyone who has spent even minimal time with children knows that they always seem to have a question they’re dying to ask. So maybe there is no need to teach them? Maybe they already know what and how to ask? Maybe the tricky part is to cultivate an environment where they feel safe sharing their curiosity, doubts, skepticism, uncertainty, and wonder?
To teach kids to play the role of C, educators must focus less on teaching and more on creating a space for kids to ask. Kids know what to ask to determine what can and cannot be trusted, but they only do so if they trust themselves and the ‘game’ they are a part of. Whether it’s the game of school or the game of society, the rules that shape our lives need to be designed so that we feel safe sharing our questions.
And that is probably the third thing Turing would teach kids in the AI age: that the rules, structures, systems and technologies that shape their lives could be different – that, unlike their bodily features, they are not laws of nature and therefore can and should be challenged – and that the more someone or something presents itself as right, good or true, the more they must ask themselves and each other why that is: Why does Sam Altman feel the need to say that a kid born today will never be smarter than AI? Who is he trying to convince? What rules, structures, systems and technologies shape his life? And how do they promote or prevent the life we want for ourselves?
Turing reminds us that while AI’s job is to play the role of A, our job is to play all the roles in the imitation game. But more importantly, he reminds us that we have what it takes to switch between them. We can pretend that we know something that we don’t (like what will outsmart whom in the future). But we can also trust our own experience and innate human ability to learn and grow. And we can ask the questions that help us distinguish and determine what is right/wrong, true/false, good/bad.
Most of us do all of this without even noticing. And maybe that’s what kids should head back to school to learn: that just because something isn’t noticed, surveyed, or programmed, doesn’t mean it’s about to be outsmarted by AI.