Something for the weekend – for the love of God, someone, anyone, PLEASE say something sensible on AI policy

(Image by kinkate from Pixabay)

‘Next steps for AI development in the UK’ was the upbeat, innocuous title of a Westminster policy eForum last week. Finally, something methodical, practical, and strategic, you might think. Hmm.

Paul Henninger, Partner and Head of Connected Technology at KPMG UK, opened proceedings with a lengthy keynote that was supposedly about ‘key considerations for building a globally competitive AI sector in the UK’. But really, it signaled that a luxury coach full of management consultants is about to pull up outside your building, with an invoice for a million dollars.

He said:

I don’t think many people need convincing that this is an important kind of technical, cultural, financial, and economic policy challenge that’s going to be here for a while now. But I do think it’s important to think about what’s at stake in terms of where in the pack we sit as the United Kingdom, in terms of that change. [Great!]

One way we’ve thought about this at KPMG ourselves is: How do we avoid the ‘Kodak moment’? The Kodak moment, which I’m sure you’re all familiar with…

Ugh, yes, well, we are all familiar with the millennial fall from grace of the film photography giant for not moving quickly enough into digital. Why? Because KPMG et al have been using it to describe the importance of grasping every tech change for the past 20 years; AI is merely the latest. But after 700 words and 10 minutes of this (during which some delegates just asked ChatGPT), Henninger finally said:

So, if you go to the next slide [please!], just some thoughts and observations from where we’ve seen this [what?] being successful.

A lot of announcements across the globe over the last six to 12 months have been focused on raising the ceiling on AI, by which I mean the investments we are making in data center capacity, in compute/processing capacity. Of the type needed to train and deploy AI models and the research capabilities you need for what… you know, on the one hand… foundation models. The sort of, you know, national access to some of the key technology. Geez, all of that’s incredibly important! 

But what I also wanted to focus on is the thing that will make, I think, the biggest difference in whether the UK achieves a competitive advantage – such as it is, i.e. raising the floor. And that’s being in that first quartile of creating value with artificial intelligence.

Being in the back of the pack is a risk of that Kodak moment! And being in the middle is sort of fine…

[Narrator: at this point I fast forward through several minutes of audio.]

But to the extent that we’re looking at having a competitive industry, there are some things we can do to accelerate the change. [At last. Tell us!]

So one is, as much as it’s important that we invest in companies that raise the ceiling in terms of what is possible with AI, we also need to look at the companies and technologies that help us manage the cost of deploying the technology. [Did he mean KPMG?]

But the thing that makes the biggest difference is leadership’s appetite for change and willingness to sponsor it extremely aggressively!

Ah. Let’s leave it there. We get the message: hand KPMG a massive check – but aggressively, mind, perhaps roaring like a hangry bear – and that coachload of consultants will tell you some more about Kodak. Or Blockbuster. Or Ratners. Or the horse-drawn buggy just before the motorcar was invented. Ker-ching!

Anyone familiar with this topic – AI policy, not Kodak – will know that it has sailed far, far beyond being a sensible discussion about economic and societal benefits, investment, regulation, innovation, and opportunity, and into a sea of claptrap – an ocean with such depths of absurdity, mediocrity, and dereliction of critical thinking that it rivals the Mariana Trench.

Now, just to be clear, I’m not referring to KPMG. No, AI vendors are to blame for this, not those management consultants whose business is about to be dismantled by an agentic chatbot.

“AGI is within our reach!” they shout. Not with LLMs it isn’t. And it never will be, as they preclude original thought (unless hallucination counts as original thought). “AI can do any job!” they shriek. Not with hallucinations worsening in more advanced models – a problem acknowledged by OpenAI this week.

On that point, it is almost as if the Web, from which training data is being scraped, has become polluted with generative slop. Last year, Europol predicted that most online content will be synthetic as early as next year, which means the pre-2023 Web, on which earlier LLMs were trained, represented the peak of humanity’s digitized, human-authored content.

The data pollution problem

So, logically, it is all downhill for reliable data now. All AI vendors can do at this point is find as many ways as possible to sell our stolen proprietary content back to us while pressuring governments to change the law and absolve them of theft – before one of 50 ongoing copyright lawsuits worldwide finds against them. And it will.

Yet the nonsense continues: “AI has reached PhD/genius level!” they shout. Nope, but Gen-AIs and LLMs are remixing PhDs – whose authors are both unacknowledged and unpaid. “AI will cure all diseases within 10 years!” they add. No, but some forms of it might speed up detection and drug discovery using open medical data – rather than stolen novels, songs, and movie scripts.

(By the way, if your company can only progress by re-purposing others’ IP, then why should it succeed?)

Don’t get me wrong: I’ve lost family, friends, and colleagues to cancer, heart failure, Motor Neurone Disease, and Alzheimer’s in recent years, while another friend is mortally ill with Cystic Fibrosis. So – believe me – I will be at the front of the queue, cheering, if AI cures those diseases. I’m no Luddite, just show me the cure and I’ll shout it from the rooftops.

And AI’s vast energy consumption notwithstanding – data centers use more energy than the whole of Japan – I hope it prevents climate change. I live on the seafront and would prefer that the ocean didn’t reach my third-floor window, and that Generation Alpha doesn’t grow up on a burning planet.

Yet LLMs have discovered nothing – literally nothing – because they can’t. To them, any new discovery would be indistinguishable from an error. Instead, they have uncovered and recycled research that most people just hadn’t read. (A useful outcome. But why not credit the authors?)

And no, gen AIs aren’t going to “cure cancer, or whatever” (Sam Altman) by scraping the work of Studio Ghibli for cynical clicks and giggles. But they are hugely popular.

Of course they are, because they promise instant, effort-free creativity (aka the recycled work of experts) to lazy primates who lack the will or talent to make something original first hand. Or who think using their own hands, eyes, and brains is pointless drudgery, despite it being what sets humans apart from goldfish.

Sadly, this democratization of creativity also devalues the very idea of human art, including financially. Just sign here, kid, and pay a sub to OpenAI!

But I digress. In this febrile market, it is almost impossible to sit through policy presentations without crying, so debased has national debate become.

Take March’s keynote to the AI UK conference from digital minister Feryal Clark, in which she claimed that AI heralds an era of “infinite productivity” and will make all levels of society equal. Poverty and injustice, begone. Just like that! Such statements are so obviously preposterous that you can’t believe an adult said them, let alone aloud and in public.

We have had nothing BUT tech innovation – new technology on top of new technology – since the dawn of the Web 30 years ago, yet UK productivity growth has been zero for the best part of 20 years. If simply buying a new technology delivered productivity growth at national scale, then guess what: it would have happened by now.

The disappointing reality of AI adoption

At this point you probably want some statistics, and I’m happy to oblige. The Global Enterprise AI study this month from a company with skin in the game, SS&C Blue Prism, surveyed 1,650 CEOs, CTOs, and senior IT leaders and found that while 92% of them are using AI to transform operations – because of course they are – 55% “admit they’ve seen little benefit.”

Most enterprises fall short and “struggle to turn AI potential into real-world performance,” explains the report. Meanwhile, more than two-thirds of advanced AI projects are abandoned. Those are NOT encouraging figures.

But still, surely investors are benefiting. Isn’t the AI Spring seeing dollars bloom for venture capitalists as the start-ups in their portfolio skip through the trees, snaffling your low-hanging fruit? Not so much.

Research published in March by asset management company Bowmore revealed that AI-focused investment funds had “failed to outperform other technology sectors, delivering average returns of just 2.5% over the past year.” That was compared to 4.3% for generalist tech funds, said the company. By comparison, videogaming funds averaged returns of 28.8%, and FinTech 17% year on year.

A month later, the news is even worse. Funds are now losing money year on year, says Bowmore – with average returns of -3.3% against the same period in 2024. By contrast, the S&P 500 had risen 8.8% during the report period, and the FTSE 100 by 7.1%. Something to do with unparalleled hype not translating into measurable ROI and business value, perhaps?

But soon figures like this will be hard to find, as AI-enabled search is taking us further and further away from source data and verifiable facts.

According to Forbes, a recent report from content licensing platform TollBit found that AI search engines refer 96% less traffic to external sites than traditional search. (Wait, all this IP theft and frenetic social marketing was just a coup on the world’s digitized data, to keep you trapped in walled gardens rather than link to the original sources? Gasp!)

So, beyond that global data coup – like the January 6 insurrection, but with everyone’s IP except the tech sector’s – why is all this happening?

Alas, we live in an era in which AI companies’ real-world profits are far less important than their share prices and VC valuations. As a result, it is a market built entirely on belief, so it demands an endless stream of CEO BS and hot air to keep the bubble inflating.

The echo chamber of AI hype

The problem is, there are so many acolytes, cultists, fake accounts and sponsored bot swarms on platforms like X that we all end up being gassed by this guff, as it is wafted far and wide – methodically and deliberately. After all, this is the technology that is so overwhelmingly important that it needs to be force-fed to us on every platform, as if we are mewling infants rather than sentient grown-ups.

Meanwhile, every government announcement about AI reads like the vendor press release it once was, so desperate is No 10 for growth, while critical voices are ushered aside.

Incredibly, my own question about copyright theft to the AI UK event last month was shunted away from the government spokesperson to whom it was directed and handed to Sonia Cooper, Assistant General Counsel at one of the world’s most litigious and IP-protective vendors, Microsoft. That’s accountability for you!

It is almost as if current UK government policy on AI is being dictated by vendors. Plus, the Tony Blair Institute for Global Change, which believes we should tear up copyright conventions – even though trade body UKAI thinks it is a bad idea. As do most MPs, every media organization, the UK’s creative sector, and the public.

So why does that august institution founded by one of the world’s most trusted and ethical individuals – stop sniggering at the back – support such a damaging and divisive policy? Well, who knows?

However, we can say that the Institute’s investors include Microsoft and Amazon. Plus, the US State Department and the government of Saudi Arabia. And its ‘innovation partners’ include Elon Musk’s SpaceX. Because all that information is in the Institute’s 2023 financial statement, which is available from Companies House.

Verifiable data! Who needs it.

My take

AI vendors: show me the real-world results for users – the ROI, not your financials or your Koenigsegg Regera.

More from the Westminster eForum in my next report!
