What students really think of AI – The Minnesota Daily

STUDENT: What is this? 

INTERVIEWER: We’re doing a podcast about AI and like students and how like. 

CECI HEINEN: We recently set a table at Coffman Memorial Union on a windy, rainy Monday to talk to students about a pressing issue these days on campus. We even brought chocolates to entice students to talk to us.

SOPHOMORE COMPUTER SCIENCE: I just want a chocolate. Which one is a dark one? Ok let me get two of these.

HEINEN: But we didn’t need to worry. Students were eager to share their thoughts about the topic we came to talk about: their use of AI in their schoolwork. There was just one thing a lot of them didn’t want to share. 

INTERVIEWER: So what’s your name and major? 

JUNIOR STRATEGIC COMMUNICATIONS: Wait, I thought I was anonymous. 

INTERVIEWER: You can be anonymous. 

JUNIOR STRAT COMM: Okay. I’ll be anonymous. 

HEINEN: So we let them be anonymous, so we could get their unguarded views. And in some cases, they claimed to be speaking only in hypotheticals. 

SOPHOMORE SPORTS MANAGEMENT: Hypothetically speaking, if I did use AI, um, I would use it for classes that I just don’t really care about. I would just copy the questions into chat and then put them into the AI bot to get the answers and put them on the Canvas.

And then when I get them eventually right, I’ll just read that process over and over again. Or if I need to get ideas, I’ll just put in like the essay prompt to get ideas for me to write it, because you can’t write an essay with just AI, it doesn’t make sense.

HEINEN: My name is Ceci Heinen, and you are listening to In The Know, a podcast dedicated to the University of Minnesota. Today’s episode about AI is a special one, as Jeff Young, host of the Learning Curve podcast, collaborated with the Minnesota Daily on this story. 

Hello Jeff, welcome to In The Know!

JEFF YOUNG: Thanks, Ceci. So we teamed up because my podcast looks at how AI is impacting college education, and this is a topic that I’ve actually been covering for a while. Many of my listeners are educators or people leading colleges, and I think they are wondering these days what students really think about these tools, what they are doing with them, and how the presence of all these chatbots affects how students feel about the value of college.

HEINEN: Yeah for sure. So we set up our table, you might have even seen us out there, we brought our microphones, and we put up a sign that said: “We want to know how you use AI.” Our team of reporters talked with more than 20 students of various years and majors. 

YOUNG: We heard a wide range of views and experiences with this. Lots of professors that I talk to, and I think lots of folks outside of colleges, I think they have this idea that students are really abusing AI. And essentially hit the easy button when they have to do homework and just have the bot do it for them. 

And this really came through in this article that ran in New York Magazine this summer, it went viral you might have seen it, the headline was “Everyone Is Cheating Their Way Through College.” And it just had anecdote after anecdote of people just doing nothing as far as learning. 

And it’s true, some students we talked to at the University of Minnesota, they are cheating with AI, I think they would admit. But we also heard from lots of students who were kind of doing the opposite – figuring out how to harness AI to study better, so they learned more than if they didn’t have the tools.

JUNIOR STRAT COMM: The way I use AI in my schoolwork is I use the Google Notebook, and if I have a reading due, I will put the reading into the AI generator, and then it will create, like a podcast out of it, so then I don’t have to read a really dense reading.

JUNIOR ECONOMICS: Let’s say I had a statistics class, OK. I went to the lectures. I studied it, but I did not learn much. But for exam, I sat down I ask AI to teach me, teach me in the simple way as possible. You know, I made it in a way so it could learn, you know, how I understand it, you know, so, and that was a good experience.  

I just had an exam, right then, you know? And I did very good. And all I studied was from, you know, ChatGPT, you know, like I put my lectures and asked it to teach me in the best way possible. 

JUNIOR ENGINEERING: On occasion, if I really don’t, don’t understand the problem. I usually use it to try to find one step of a specific problem. Like, I don’t try to have it solve the entire problem, because that would be counterproductive. I just look to, sometimes I just can’t figure out one step and don’t have time to go to office hours. 

SENIOR EARLY CHILDHOOD DEVELOPMENT: I kind of just use it if there’s part of a direction for a paper and I don’t really know what it means because it’s a lot of fancy words. And I say, “Hey simplify it for me,” because I don’t know what this means.

YUSEF YUSUR: I use it to study. The Google Gemini has a “guided learning” thing where you can just give it whatever material you’re working on, and it’ll ask you questions that lead you to the answer. I think people using it to straight up just do their assignments are wasting AI. I think it’s like more of a tool than a get-out-of-work-free card. You know?

YOUNG: Those were four juniors and a senior that we talked to, majoring in strategic communications, economics, engineering, early childhood development, and industrial and systems engineering, respectively. OK, Ceci, were you surprised, um, by these uses you heard here?

HEINEN: Honestly many of these were new to me. I have used NotebookLM before for some of the readings that I have to do, and it can very scarily turn readings into a “deep-dive” podcast. Which is helpful but also slightly terrifying. For those who haven’t used it, it’s this free Google tool that uses AI to help summarize material in different ways.

For me it was interesting to hear about how each major had a different approach to using AI. 

YOUNG: For some of the students that we talked to, AI was helping them push past something they were just stuck on, especially when what the professor provided wasn’t working for them. In my reporting I have been finding that a lot of the time, AI seems like a band-aid for parts of the traditional teaching system that are kinda broken, or that don’t work for a lot of students.

HEINEN: You’d think that many professors would want their students to use AI this way.

YOUNG: Yeah I mean, these seem like a good thing. But even some of these students we talked to who had found productive uses of AI also admitted to pressing that easy button in other cases, depending on the class or the assignment. It seemed to depend on whether the student perceived the class as valuable.

SOPHOMORE COMPUTER SCIENCE: I don’t know, there are some classes that just seem a little pointless to me. So I do use AI for that. 

SOPHOMORE SPORTS MANAGEMENT: I will be using AI for classes that don’t matter for my major. Because why am I taking a geology class when I’m working in sports?

SOPHOMORE UNDECIDED: It makes things go by faster sometimes. When it’s something like that I don’t necessarily need to do like, it’s not gonna help me like study, but I need to get it done. Yeah, like a little five point easy thing, get out the way so I can read, so I can do things that do matter.

YOUNG: Those were three sophomores. The second one is the same student we heard from at the start of the episode — he’s majoring in sports management.

HEINEN: It’s clear that students are using AI in a variety of innovative and maybe some questionable ways. Throughout all the interviews with students, fear was the common thread. One of those fears is the effect of AI on the jobs that college students are hoping to graduate into when they finish school. And a group of students that I know are getting hit hard by this are the Minnesota Daily’s staff, whose future jobs as journalists are in a state of limbo. 

And it’s not just a question of jobs, but a question of whether AI will take over mass communications or whether the human voice will continue to be valued by readers. So in addition to interviewing students about AI at our table, we also reached out to a handful of MN Daily staff about their unique perspective on AI as budding journalists.

YOUNG: Yeah, both of us were really curious to hear the perspective of the student journalists at the Minnesota Daily because for them, these issues are not theoretical. They run a newspaper, they’re doing all the writing and editing and making podcasts.

And so they are wrestling first-hand with whether to experiment with AI in their work to see if it could maybe improve what they offer to readers, or whether to stay away from it so that they hone their human skills without the crutch of these chatbots.

HEINEN: Yeah I think, overall, the general vibe about AI from MN Daily staff was extremely negative. Nearly all the half-dozen staffers that we talked to stressed a desire to maintain the human-to-human connections that journalism fosters.

City desk editor Alexandra DeYoe and podcast reporter Grace Aigner were pretty passionate in expressing their desire to keep journalism human.

ALEXANDRA DEYOE: Journalists are human and we cover human interest stories. We cover humans, even if it’s political, we’re interviewing humans. AI can’t do that. AI can’t understand humans like a human can. I think yes, we need to value the objectivity of journalism, but we also need to value the compassion, the sensitivity, the trust that comes with humans doing journalism.

Just I hate AI. Oh my God, I hate it so much. Um, it makes me so scared and whenever I see anything about AI in the news, it makes me wanna tear my hair out. Thank you for letting me say that.

GRACE AIGNER: It’s about empathy and it’s a very human thing to have ethics. It’s something that you learn from actual human experience of being a reporter and making decisions of, you know, whether to let a source be anonymous or not. 

How to cut quotes so that they’re not, they make full sense and they are in full context of what the person says, but they can fit into a story. Like all of those types of decisions are done through experience as a reporter. Like you can’t teach AI those experiences because they are not in a physical space talking to another person.

YOUNG: Ceci, you had mentioned to me that the MN Daily has seen some examples of where students were kinda turning in work, trying to get into the newspaper as a staff person, and their applications didn’t quite seem like their own work. 

HEINEN: Yeah, I’ve unfortunately had several applications from students who want to work with me on the podcast desk, and their cover letters kinda sounded bland and unnatural and were clearly AI generated. Which made me sad.

YOUNG: Yikes. 

HEINEN: But our managing editor, Sam Hill, has also noted that I’m not the only editor who’s been seeing that.

SAM HILL: We’ve had a lot of problems with AI at the Daily. Journalism is all about, you know, writing your own stuff, getting your own information, verifying your own information. And when people use AI, first of all, it’s unethical that they use an outside source to get the information that they have to write as their job.

But also like when we’re getting cover letters produced by AI, work produced by AI, you just don’t know if people are A, competent and B, like doing the work that they need to.

HEINEN: A lot of folks here at the MN Daily felt this fear of using AI in their own work. Many, like opinions desk reporter Amy Watters, were adamantly against it, believing it would take away from their learning opportunities and the process of journalism.

AMY WATTERS: I think there’s a growing level of apathy amongst students in general about schoolwork and the world and things like that, right. They care more about getting a good grade over the process. I do think students want to learn and I think if you removed grades out of the equation entirely, I’d say I think there’d probably be a lot less AI use.

But, people care more about the achievement over the actual process. And at the Daily, we care about the process, we care about the writing, we care about the learning, we care about the talking to people.

YOUNG: As I listened to these student journalists, I was reminded that putting out great articles, that’s just one goal for them. They’re also in a kind of unofficial classroom. They don’t get grades for doing the paper, but it is a training ground, a way to try out a profession they might go into after graduation. So efficiency does not make as much sense in that context as it would in like a commercial newsroom. 

AIGNER: We are very much approaching what we do with a mindset of that we are student journalists we are learning. So it must be done by us, you know? I think when you get out of that realm the questions of convenience and productivity and all those things, and those things are a part of the Daily but we have a slightly different focus.

HEINEN: When it comes to how much AI might play a role in journalism in the future, the staff at our student paper have differing opinions. Owen McDonnell, the video editor, believes that AI will soon become the standard for newsrooms. 

OWEN MCDONNELL: I think that within a year it is going to completely replace people who are writing briefs and breaking news. You’re just gonna plug it in and it will write it right away. I think that within a few years even, it’s gonna be pretty hard to distinguish a vast majority of stories from human and AI. 

The, the big human ones, the ones that are like, take six months of reporting. Those, like, maybe not, but like even then it’s like you just plug in all your information, you plug in how you write, like it’s gonna be pretty close.

YOUNG: Most of the Daily’s staff seem pretty optimistic about their future careers, though, with many of them talking about how AI is not really gonna replace the human touch of journalism. Grace Praxmarer, the copy desk chief, is one of many undergraduate journalists who still believe there’s hope for the journalism market.

GRACE PRAXMARER: Tell people not to be too concerned about it because I don’t really see how AI could replace journalists. It’s a very unique job and it requires a lot of face-to-face interactions, interpersonal connections, and, you know, having that unique voice, and I don’t think AI is capable of doing those things as well as humans could.

HEINEN: But time and time again, the conversation with these student journalists returned to their almost visceral feelings about AI.

AIGNER: I hate it. I do like, I’m sorry. People are no longer thinking critically because they’re outsourcing their thinking to whatever. I’m not gonna talk about that specifically, but I don’t like AI. I think it’s weird and I, it scares me, and it all goes back to the human aspect of it. Like journalism was created as a public service for people by other people. 

It’s not okay to me. Like you, if you like, want to be a newsroom, that is supposedly publishing for people and you’re not gonna have people do that work? Like, it just does not make any sense to me. It’s very contradictory to like the core tenets of journalism. 

HEINEN: Student journalists are grappling with the unknowable future of AI. I will say, I do not think that AI will ever be able to make a podcast like this where you have to convince sources to share their thoughts and their feelings, so at least that’s reassuring. 

YOUNG: Yeah I definitely have to believe that’s right. We did learn from our reporting that many students are feeling anxious about how AI is changing the job market, though.

INTERVIEWER: Are you worried about your first job possibly being jeopardized by AI? 

FRESHMAN PSYCHOLOGY: Definitely, somewhat. I hope that it won’t be. I’m looking to major in psychology. 

POLITICAL SCIENCE MAJOR WILL: I’m just worried about entry-level jobs, especially in white collar business and other jobs AI can just take. Would my degree be less valuable now than when I went into it two years ago?

YOUNG: That was a sophomore majoring in political science. But not all the students that we talked to were worried. Some looked forward to using AI in their jobs, like Yusef Yusur, an engineering major.

YUSUR: In industrial engineering we’re responsible for improving and innovating businesses. So I can see in the future I could just give AI all the info that I have and tell it what’s the most efficient way for this business to spend their money. And from there with my college education I could see if that’s a valid answer and use that. It would be a tool.

HEINEN: Or they felt their career wouldn’t really be impacted either way by AI, like that sports management major we keep coming back to.

INTERVIEWER: Are you worried about your first job being jeopardized by AI? 

SOPHOMORE SPORTS MANAGEMENT: No. Because it’s not. And I know it because I already have the job and I’m working it right now and we do not use AI at all.

HEINEN: Another layer of this whole AI story and how it relates to students is how university faculty are approaching AI. Syllabi in every university course now include a subsection specifically related to AI use. 

Every instructor is allowed to set their own rules and regulations about whether and how students can use AI in assignments, leaving students facing a confusing landscape, where the rules can vary from class to class. Many students we talked to said their professors have gone down the route of banning AI from their courses. 

WILL: Most professors have a policy against AI use, especially in essays and stuff. Hence why, last semester, one of my professors moved all exams to blue book.

TOBY WILLIAMS: My professors on the technology side are very adamantly against using it, especially when we are still learning some of the basics. Just because if you are kind of using AI to substitute learning the basics, they believe once you advance in your sections or your classes, you really haven’t learned much if you’re substituting those basic processes with AI. And when AI can’t help you, then you’ve got no sort of background or footing to propel yourself forward on.

JACKSON BUG: I definitely think they’re anti-AI more so they’re not very lenient on it and they’re like, we’re gonna check all of your grades and make sure that nothing is being used. Limiting it is going to be more challenging. So I think kind of finding a way to be able to use it while also making sure it’s not only what students are using.

YOUNG: The reality is that professors can’t really tell when students are using AI or not, at least not for sure. There are AI detectors that promise to help, but these have proven unreliable or, even worse, they often falsely accuse honest students of using AI, especially when the student’s native language is not English.

The University of Minnesota’s Teaching Support website actually outlines a lot of these problems with these detectors for AI, and it has a statement that says: “the use of AI detection tools is not recommended.”

And it seems like many of the students that we talked to realize that professors can never definitively tell, such as an anonymous second-year computer science major we talked to.

SOPHOMORE COMP SCI: I feel like they kind of give you like the whole spiel, like at start of class like, “Oh, we can tell,” but they never tell. I don’t think they’re, I don’t think it’s able, I don’t think nobody could tell something’s AI even like Turnitin. Like if you look at it when they, when you get flagged for AI it’s like, I, I don’t really see it looks like natural language at this point. 

Especially if you’re smart and you don’t just like directly copy and paste it. Like there’s no telling really. I think that the way they tell is by comparing other student solutions. I think that’s what points it out. But if you weren’t, if you didn’t have the other student solutions, you wouldn’t be able to notice.

HEINEN: This is interesting because we had a physics grad student who is a TA come up to the table for an interview to talk about her experience grading lab reports.

GRADUATE PHYSICS STUDENT: Physics TAs can tell when you use AI to write your lab reports, mainly it’s because a lot of, you’re putting in a lot of stuff that we didn’t ask you to put in. We can tell, it’s like at a higher level we’ve set up the lab report so that you know how to do it.

I might see for every assignment, but it’s um, they have four lab reports to do a semester, so I might see like one person every time we have to do a lab report, submit something that has like AI in it.

YOUNG: But these students that TA was grading, I gotta say, they seem particularly clumsy in their AI use, or kind of showing they don’t understand what’s being asked. So I kinda guess that plenty of students that are hitting the easy button they do accidentally give themselves away with how little they understand the material. 

HEINEN: I will say, every professor I have had has known, or feels like they know. And I’ve met many classmates who have been caught for AI use in a paper or assignment and had to go through the university’s academic dishonesty program. 

And that’s another thing, the university is sending really contradictory signals about AI. Political science major, Will, said he was confused by the university’s recent deal with Google Gemini, which the university now provides to all students for free, and what message that promotes. 

WILL: They had an agreement with Gemini, so students who would get Gemini with their student fees. But I mean, there’s like cases of where there was a PhD student last year who was accused of using AI during his research. 

I’m pretty sure there’s countless other examples of students taken to, uh, honor code violations for AI use. And so the university just needs to put some real clear guidelines, and there’s a task force right now from the university on AI use, but I think because of how rapidly this is growing, that we need some real policy results now.

YOUNG: The case of this PhD student, it was covered in the Daily. Haishan Yang, a third-year PhD student at the U of M, he was expelled in January because of accusations that he used generative AI in his work. 

HEINEN: Yang sued the university and filed a complaint with the Minnesota Department of Human Rights, claiming he was wrongfully accused and faced discriminatory treatment based on his national origin. His writing was flagged for AI use, but he said that as a non-native English speaker, AI detectors can often flag his style of writing.

YOUNG: We had multiple students cite this very case as a reason that they fear AI, they don’t want to be expelled. Ceci, you reached out to administrators at the U to try to get their official position. What did you hear?

HEINEN: Yeah, in a statement to the Daily, Lauren Adamski, director of the Office for Community Standards, said they began tracking academic integrity cases involving student AI use in Spring 2023. They’ve since found that cases involving AI have jumped from 26% to 39.4% from 2023 to 2025. 

She said that the U of M will continue to empower instructors to define their own parameters for generative AI use in their courses, and that the university offers a course covering AI basics, ethical use and responsible engagement with AI while learning, should students choose to take it.

Adamski stressed that when they receive a case from a professor involving AI, they follow the same procedures, ensure that due process rights are respected and make sure students know all of their options in resolving the case. They take evidence seriously and have high standards for what constitutes an academic violation.

YOUNG: I have been talking to college leaders around the country, and sending mixed signals about AI, that’s pretty typical. Because really, universities are afraid of AI too. For one thing, it’s this huge challenge to academic integrity. I mean AI makes it hard for colleges to prove that students are learning anything, since it really is tricky to detect whether a student has submitted an essay or homework themselves, or just had AI do it. 

So there is this risk that the public will just lose faith in degrees if they feel that most students aren’t actually learning. And because of that, there’s a push around the country at colleges to revamp assignments to make them more AI-proof, by doing things like having students do projects AI can’t easily do or to move to old-fashioned things like blue books.

HEINEN: I’ve had several professors who switched to blue books, so I’d say that’s definitely happening here. I personally love writing in a blue book. I find it really gratifying when I can prepare for an exam and write it in a blue book with no resources except my brain. But I know that many students struggle with synthesizing ideas on paper, when we’re so used to having Google right at our fingertips. 

YOUNG: And now it’s AI too. And colleges also realize they kinda need to prepare students for jobs that are increasingly adopting AI, and they don’t want to look like they aren’t adapting to the generative AI revolution that seems to be booming these days.

HEINEN: There’s really no quick fix to generative AI’s presence in universities. And many students are still worried about not being taught the right skills for their future careers. Third-year biomedical engineering and genetics, cell biology and development major, Sam Thibodeau, thinks that allowing each professor to develop their own AI policies is counterproductive, and students just need one clear policy to go by. 

SAM THIBODEAU: I don’t think that the U is doing a good job at preparing students for using it because there’s not necessarily the most clear policy and it differs for every class on how to use it. And so with that, like I don’t think a lot of students see it. They either see it as a purpose for cheating in the class or like not using it at all because their professors like explicitly banned it. 

And I think the reality of it in the job is gonna be more of like as a tool. I don’t think there’s gonna be a lot of jobs where you’re gonna be able to like use generative AI for the entire job, like students do when they write essays or use it to solve problems. But I think it will be used as a tool in almost every profession. And I think like explicitly banning it kind of harms students’ ability to learn how to use it as a tool.

YOUNG: Yeah, and I mean I totally hear that. But in reality, the problem is that colleges can’t force professors to have the same policy on AI, just like they can’t force professors to have the same policy on late assignments. It’s all part of academic freedom. 

Of course those larger rules about how professors work at colleges could be changed, but this is just not something that is going to happen quickly, even if colleges are certain they wanted a blanket policy one way or another.

HEINEN: It’s definitely tricky. An overarching theme we discovered throughout this entire episode was a longing for human-to-human connection. Students are afraid of AI not only taking jobs and diminishing their education, but taking away their opportunities for connection too.

AIGNER: Just all of it in general. It just makes me sad. Like humans are desiring ways to live less humanely. Creating things, writing things, solving problems, thinking about how they wanna structure an essay, making a grocery list, all of these things that are just parts of living are now being outsourced to a little robot. 

It makes me very sad. Humans are trying to escape the human life we live, it feels like, and that actually, like, I hate it. 

YOUNG: The future is uncertain, and young professionals and college students are the biggest group of people grappling with that reality. 

HEINEN: I hope that human connection wins out over a desire for further productivity and ease. Because learning, failing, struggling, finding new things and meeting new people are all quintessentially human and, in my opinion, are the reason we live life. 

Thank you for tuning into this episode of In The Know, and thank you Jeff for joining us today. Please go support Jeff’s work at learningcurve.fm.

Thank you to Wren Warne-Jacobsen, Amy Watters, Atticus Marse, Grace Aigner, Lucas Vasquez, Callie Burch, Matthew Jegers and Vivian Wilson for supporting this story. 

Thank you also to Rob McGinley Meyers for helping edit the episode. 

This episode of In The Know was written and produced by Ceci Heinen and Jeff Young. If you have any questions, comments or concerns feel free to reach out at [email protected]. Thank you again for listening. 

I’m Ceci Heinen. 

YOUNG: And I’m Jeff Young. 

HEINEN: And this has been In The Know.


