The sophomore slump is real. On any given school night I am awake until 4:00 a.m., in a cold sweat, staring at my to-do list with a case of task paralysis so advanced that I would sooner drop out than read another page of James Joyce. So I scroll Instagram to take the edge off, only to be bombarded with an ad for yet another artificial intelligence homework app.
And each time I am tempted to reach into that yawning white void for a summary of a reading, or some feedback on a paper. Because everyone’s doing it and it’s just this one time and I’m so tired. But each time I stop myself, because we all know that AI is bad. Right?
I mean, it writes suicide notes for 16-year-olds and drains the planet’s water and makes it easier to cheat. These ads offer us relief from the very thing college is supposed to be for: learning. That’s all objectively bad.
Right?
But how can it be so bad if the University of Michigan has its own little AI friend with a cute little name, Maizey? How can it be so bad if the University proclaims its pride at being “the first university in the world to provide a custom suite of generative AI tools to its community”? Or take the end of the student AI policy, in which the University decrees that “your usage of GenAI-based tools can give you the means to better not just yourself, but also society as a whole, and there is an ethical responsibility towards doing so.”
Does this not feel like we’re being punked? Almost as if our administration is luring us into the maw of academic dishonesty, waiting to chomp down in the ill-fated moment that we type a calc problem into UM-GPT? This language frames AI as a moral good and an imperative for success — it is no longer optional.

If it’s not clear by now, I’m not a huge fan of AI.
My reasons are vague and numerous: something about carbon footprints, something I read somewhere, my fear of sentient robots, or just the annoyance of reading AI policies on syllabi where professors sound so resigned to our cheating that they meekly tell us it’s OK to use ChatGPT as a source, as long as we cite it.
But honestly, I don’t understand AI enough to properly dislike it. And I’m curious as to why the University is pushing this tool so much if it inherently devalues learning.
So I immersed myself in artificial intelligence for one week.
To begin my odyssey, I took advantage of the University’s meticulous GenAI website. The site serves as a springboard for not only the titular “toolkit” but also a host of U-M-curated and U-M-created resources that detail how to properly use AI in various contexts.
It’s immediately clear that the University stands at a crossroads: banning AI outright would only incentivize covert use, but fully endorsing it means promoting its flaws.
Before diving into the tools themselves, I wanted to know what they do and how they do it. Good thing the University has more than 16 hours of online workshops (accompanied by a 16-page course guide) on general AI use and the toolkit.
As expected, these workshops are boring. One video included an advertisement for the GoBlue app, an AI companion for the University community, in which a student wanders around campus asking her phone where the dining halls are. She even takes a picture of the menu in North Quad Residence Hall and asks the robot which dinner option has the highest protein. The implication: This girl is smart enough to get into a top university, but too dumb to pick dinner.
Watching these tutorials, I was plagued by the pointlessness of it all.
For example, MiMaizey is a chatbot that can be synced with Canvas to answer course questions so the student doesn’t have to root around in the syllabus to find the absence policy. To this I ask, have we forgotten about Ctrl+F?
The “problems” these tools solve are so mundane that they’re practically nonexistent. The cool parts of the AI “toolkit” are its uses for highly specialized projects, like “deep reinforcement learning in bioinformatics” or “metadata curation in social science data management.”
I study English and the Environment. I can’t appreciate these use cases because there is no literary equivalent to protein folding.
However, this did not stop me from testing out the tools myself.
I was cramming for a geology exam — poring over my handwritten notes because I’ve yet to succumb to the iPad plague — when I remembered I was writing an article about AI and had a hall pass.
Feeling guilty about how excited I was, I booted up UM-GPT and NotebookLM, another University-promoted tool. Instead of flipping through a month’s worth of lined paper for the answers to the study guide, I just asked UM-GPT, and it answered in wonderful detail. I typed up some questions and answers and fed them into NotebookLM, which morphed my study guide into not only flashcards, but a podcast.
I know I sound like a dumbstruck caveman because everyone’s been doing this for a while now, but using AI as a study tool was (tragically) awesome. I had time to clean my room, because I could multitask while listening to the robot-read podcast. I saved hours that I would have spent handwriting flashcards and rereading notes. I went to sleep at a reasonable hour and ended up acing the test.
While I don’t think that using AI made me pass, it definitely made it easier. I could have put in all the work myself, but who likes a martyr? With AI, I did the impossible: I conjured more hours in the day.
At this point, I was being slowly seduced by the possibilities, and so I did the unthinkable: I asked for a summary of my English reading. I’d already read the assigned chapter of “A Portrait of the Artist as a Young Man,” of course, but it still felt like a betrayal of my English major sensibilities. According to a list released by Microsoft, writers rank fifth among the jobs most impacted by AI. But does using it make the work any less real?
Predictably, the robot’s responses were stale and uninventive. I tried to induce hallucinations by cross-questioning it about made-up scenes, and while I couldn’t fool it, it still only provided skim-level analysis. I’d like to say that AI can’t get you an English degree, but if you were determined to be a C student, it might be able to. The analytical responses it provided were boring, but that doesn’t make them wrong.
Herein lies the uncomfortable truth of AI: It works.
If a chatbot can teach me geology faster than a professor, or summarize every book on my shelf, then the real question isn’t whether AI is a shortcut — it’s whether the entire model of college makes sense anymore.
In the age of accessible information, what is an education? And more importantly — why am I paying for one?
The danger of AI in school isn’t academic dishonesty. It’s an efficiency so tempting that it could lead us to abandon the most valuable aspect of a university education: the human part.
While the University feeds us shiny new tools, we are in danger of forgetting the greatest untapped resource on any college campus: the professors. We are surrounded by human experts, yet we are encouraged to take our questions to robots instead, something that, in the immersive spirit of this article, I totally could have gotten away with.
Instead, I decided to talk to a philosopher.
“Why do people use AI to cheat?” asked Elizabeth Anderson, John Dewey Distinguished Professor of Philosophy, in an interview with The Michigan Daily.
“It’s almost always time pressure,” Anderson said. “And one thing I’ve noticed is students (now as opposed to in the past) are way too busy.”
AI promises to free up time, but in practice it just lets us pack in more. And as students juggling 18 credits a semester, part-time jobs, pre-professional clubs and, if we’re lucky, a social life, all we want is more time.
It’s natural that the most time-consuming (but often most rewarding) pursuits — the reading, the writing, the humanities — will be the first to go.
“None of what we do here in the humanities is about producing content,” Anderson said. “It’s about developing and exercising the skill of disciplined thinking, tackling problems that are open-ended and don’t have determinate solutions, thinking hard about hard problems, even being able to come up with a good question. It’s, like, really hard. The outcome desired is not a product. It is … the acquisition of a skill, it’s all in the practice, the actual doing. It’s a process.”
We have lived in a product-driven world for quite a long time. The difference is that when universities were first founded, their role was to teach us the value of the process.
And the truth is, the process sucks.
The process is late nights in the library, the constant self-doubt, the weekends lost to procrastination. But it’s also the pride in our good ideas and the hope that our minds will be worth more than our eventual degrees.
The process is painfully and wonderfully human, and we are in danger of losing it.
After a week of immersing myself in AI, I hadn’t planned on reaching the milquetoast conclusion that there are pros and cons to its use. I even tallied the equivalent water and energy consumption of my 35 UM-GPT searches: through my usage of AI, I drank an extra 1.5 liters of water and left a 100-watt lightbulb on for four hours, things I might have done anyway without AI.
So my slam-dunk conclusion isn’t about environmental horrors or individual benefits, or even about AI itself. It’s about the reasons we want to use it.
The robots we should fear are not the white-void chatbots, but rather the students who instrumentalize each moment of free time, lack fulfillment and aspire toward a level of efficiency that should only be achieved by machines. To forgo the shiny new AI tools and embrace the Herculean process of learning is not easy, and it takes a ridiculous amount of time.
But it’s up to us to decide whether or not we value the process.
Statement Columnist Siena Beres can be reached at sberes@umich.edu.
