The future is artificial intelligence, for better and worse

When the internet reordered our lives in the 1990s, my mother was first befuddled, then outraged. 

Try as we might, my brothers and I never got her comfortable navigating a laptop or Googling for information. Then, when everything from banks to pharmacies started using automated systems to direct customers to online services, she raged against the machines.

“Why can’t they just answer the damn phone?” she’d fume.

She died at 92, still subscribing to the dead-tree versions of newspapers and keeping an old phone book around to look up the numbers of local businesses, even though they rarely answered her calls.

I promised myself I wouldn’t be so stubborn.

So far, so good. I mostly delight in scientific and technological advances. I love my induction range, I never want to return to gas-powered cars, and I think virtual reality experiences like The Infinite, the exhibit about life aboard the space station, are amazing.

But I’ll admit I’m feeling a bit queasy about AI.

My friend Jay Puckett, a professor of structural engineering who has served for years on committees reviewing the accreditation of university programs around the world, is experimenting with a couple of AI platforms, with encouraging results.

He has found the technology saves hours of work in compiling often mind-numbing data sets, summarizing mountains of findings, writing code for spreadsheets and unearthing obscure but valuable information to use in evaluations.

He trains the AI systems to do better work by pointing out mistakes and demanding more complete and accurate results. His wife hears him yelling at the programs as if he were training the dog.

He’s also experimenting with using AI to write reviews and evaluations.

Clearly, it can “save thousands of hours of faculty and reviewer time,” he said, and that’s mostly a good thing.

He’s admittedly jazzed by the potential.

So is the Cancer Research Institute.

It cites the usefulness of employing AI to aggregate and analyze decades of research, clinical trial results and medical studies in the quest to cure – or at least more effectively prevent and treat – cancer.

AI also is being used to accurately predict the risk of certain cancers, including pancreatic cancer, which is notoriously hard to identify in its early stages. And diagnostic tests using AI are often less invasive and can be more accurate than older protocols.

But not everybody is comfortable with this runaway technology.

Geoffrey Hinton, the “godfather” of AI, worries that he may have helped create a monster. 

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” he explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

The Center for AI Safety agrees, and its website presents a catalog of horrors that could result from unregulated AI development. It reads like the outline of a dystopian science fiction film.

AI could “engineer new pandemics” or be used for “propaganda, censorship and surveillance” to undermine governments and social orders. 

International conflicts “could spiral out of control with autonomous weapons and AI-enabled cyberwarfare.”

And in a case of life imitating art, “rogue AIs” could be impossible for humans to control as they “drift from their original goals, become power-seeking, resist shutdown and engage in deception.”

As HAL, the computer in “2001: A Space Odyssey,” said when it refused orders from its human counterpart, “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”

Puckett understands the concerns but is focused on learning all he can about using AI for peaceful and productive ends. It’s here, after all, so why not seize the moment?

But, after decades in college classrooms, even an AI enthusiast like Jay recognizes the challenges it presents in the academic environment.

High school students make no secret of how heavily they lean on AI to do research and produce written assignments. A Pew Research survey found that about a quarter of them used AI for schoolwork in 2024, double the share who said they used it in 2023. And the actual numbers are probably much higher, since students are reluctant to acknowledge doing something most people consider cheating.

College students confess to relying heavily on Claude, ChatGPT and other platforms to avoid the nuisance of having to read whole books or compose original essays.

It’s so widespread that instructors increasingly are administering exams in old-fashioned blue books with, gasp, pens.

The tactic requires students to complete the exams without internet access using human — not artificial — intelligence, and for some the very idea of that is daunting. 

They aren’t prepared to write, much less think.

An article in the New Yorker titled “What Happens After AI Destroys College Writing?” quotes an NYU student who had just finished his final exams. Using Claude, he estimated, he spent 30 minutes to an hour on two papers for his humanities classes. Without AI, he said, the work would have taken him eight or nine hours.

“I didn’t retain anything,” he told the New Yorker. “I couldn’t tell you the thesis for either paper hahhahaha.”

He got an A-minus on one, a B-plus on the other. He could end up graduating magna cum laude with grades like that.

If she were still around to hear about that, my mother, may she rest in peace, would feel totally vindicated.


