Experts Explain: AI as ‘normal’ technology

Ever since the artificial intelligence (AI) research organisation OpenAI launched its generative AI chatbot ChatGPT in November 2022, AI has been seen as a supremely disruptive technology that will soon fundamentally transform our lives — for good or for bad. Some believe that the impact of AI will be similar to that of the Industrial Revolution (1760-1840); others see it as a “superintelligent” agent that could go rogue and potentially take over humanity.

Is this likely? No, say AI researchers at Princeton University, who have argued that it would take decades — not years — for AI to transform society in the ways that some big AI labs and companies have been predicting. Here’s why.

Why do you think AI is in fact a ‘normal’ technology, whose transformative economic and societal impacts will come slowly?

Arvind Narayanan: AI being a ‘normal’ technology is the common-sense view. We hear about supposed AI breakthroughs every day, but how much of that is actually real? And even if the technology is advancing rapidly, our ability to use it productively is limited because there is a learning curve. If anything, stories about AI being adopted rapidly are even more exaggerated than stories about AI breakthroughs.

When we peel back the curtain, AI does not look that different from other technologies — such as the Internet. This is not to say that it will not be transformative. But if it is transformative, that will happen over decades, not months or years.

Chart: Annual global corporate investment in AI.

But many have argued that AI is unlike any technology of the past, and that its influence going forward will be non-linear.

Sayash Kapoor: We do not think AI is different from other past technologies in its patterns of tech development and societal impact. We have attempted to outline a vision for the future where AI is neither utopian nor dystopian, and we can learn from how past general-purpose technologies impacted the world.

For example, many scenarios of AI takeover assume AI systems would gain tremendous power without first proving reliability in less consequential settings. In our view, this contradicts how organisations actually adopt technology. We think there are many reasons for businesses to ensure humans can control the AI systems that they adopt.

The rollout of technology like self-driving cars also shows this pattern — leaders in safety such as Waymo (a subsidiary of Alphabet, Google’s parent company, which provides driverless ride-hailing services) have survived, whereas laggards such as Cruise have failed. (In December 2024, General Motors, which owned 90% of Cruise, said it would stop funding the driverless robotaxi service.)

We expect that poorly controlled AI would not make business sense, and that policy interventions can bolster incentives to ensure human control.

Your essay ‘AI as Normal Technology’ argues that the impact of AI will be slow, based on something called “the innovation-diffusion feedback loop”. What does that mean?

AN: For much of the last decade, generative AI models improved rapidly because companies trained them on bigger and bigger datasets collected from the Internet. That era is now over. AI has learned more or less everything it can learn from the Internet.

In the future, AI will have to learn by interacting with people, by doing experiments in the real world, and by being deployed in actual companies and organisations, because those organisations rely on a tonne of so-called tacit knowledge that is not written down anywhere.

In other words, as AI gets more capable, people will gradually adopt it more, and as people adopt it more, AI developers will have more real-world experience to use for improving AI capabilities. That’s the innovation-diffusion feedback loop. But since it involves human behavioural change, we predict that it will happen slowly.

A common fear is that if AI capabilities continue to improve indefinitely, AI could soon make human labour redundant. Why do you not agree?

SK: If we look at how past general-purpose technologies such as electricity and the Internet were adopted, they took decades before the raw capabilities were translated into economic impact. This is because the process of diffusion — when businesses and governments adopt general-purpose technologies — unfolds slowly, over decades.

As this process unfolds, AI progress and adoption would be uneven. Tasks that are automated would quickly become cheaper and lose value, and human labour would shift to the parts of jobs that automation does not touch. Human control would also remain an essential part of many jobs, in the form of oversight of automated systems.

Why do you think there is a need to focus on risks posed by AI that arise in the deployment phase (when AI models are used for certain tasks) rather than in the development phase (when AI models are being trained)?

SK: It is not enough to develop bigger or better AI models to realise their impact — the societal impact of AI is realised when this technology is adopted across productive sectors of the economy. This is a crucially overlooked intervention point for both the benefits and risks of AI.

To realise AI’s benefits, policy interventions need to be far more focused on enabling adoption, such as by training the workforce or setting clear standards for procuring AI tools.

Similarly, to address the risks of AI, it is not enough to align AI models with human values. We need to address concerns of reliability, such as by developing fail-safes in case of malfunction. These cannot be addressed in the development stage alone; different deployment environments would require different defences.

In March 2023, several tech experts and leaders (including Elon Musk, Steve Wozniak, Andrew Yang, and Rachel Bronson) called for a temporary pause on the development of AI systems due to the risks they might pose to society. Will such interventions work?

AN: Suppose governments try to defend against AI risks by attempting to prevent terrorists or other adversaries from gaining access to AI. Here’s what will happen.

They will have to take draconian measures and curb people’s digital freedoms to make sure that no one can train and release a downloadable AI system on the open web. This makes society less democratic, and thus less resilient.

It won’t even work. At some point, these nonproliferation attempts will fail, because the cost of creating powerful AI systems keeps dropping rapidly. And when they do fail, we will face those risks suddenly, and will have few defences against them.

By contrast, if we follow a ‘resilience’ approach (which involves preventing concentration of power and resources), it actually promotes the availability of open models and systems, leading to a gradual increase in risks — and we can gradually scale up our defences in proportion as well. Essentially, we can build up an “immune system”. Just as with diseases, immunity is a more resilient approach than suppression.

Arvind Narayanan is Director of the Center for Information Technology Policy, Princeton University. Sayash Kapoor is a computer science doctoral candidate at the Center for Information Technology Policy. They are authors of the essay ‘AI as Normal Technology: An alternative to the vision of AI as a potential superintelligence’, published in April 2025, and the book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (2024).

They wrote to Alind Chauhan by email. Edited excerpts.




