Chris Iwicki sat in the Great Hall in Global Commons, scribbling notes on a small notecard. He glanced up at his laptop periodically. He was studying. He had been studying the day before, too, but without the notecard: He was asking ChatGPT to explain a math formula.
According to a March 12 survey from Elon’s Imagining the Digital Future Center, 52% of U.S. adults now use AI large language models like ChatGPT or Google’s Gemini. The survey noted that twice as many respondents said they use AI mostly for personal learning or planning as said they use it mostly for work.
Here at Elon, some students — Iwicki included — use AI not for work or everyday personal tasks, but as a study tool.
A common thread across students was that AI should be used for the right reasons. Which reasons count as the right ones, however, was less consistent.
Freshman Eddie Fermanian, who’s majoring in theatrical design technology, said he uses ChatGPT for his resume and to brainstorm for essay assignments.
“As long as you’re not using it to pass off as your own work,” Fermanian said of AI’s acceptable use.
Freshman Kenzi Maness, who is double majoring in history and political science, also saw brainstorming as an acceptable use of AI.
“I think if you need help coming up with a bunch of ideas for something that’s not necessarily an assignment, maybe a title or something, then that wouldn’t be terrible,” Maness said. “But using it to write your whole entire essay or something like that would be not using it in the correct way.”
Maness herself uses AI to reorganize her study guides, which she described as putting her own words in a different order rather than having an AI model generate words for her.
Junior biochemistry major Elizabeth Carroll does not use AI and said “there are a lot of things that people need to do themselves,” though she allowed that AI could be an acceptable starting point.
Mackenzie Pridgen, a sophomore majoring in acting and arts administration, said AI models could provide more specific answers than a normal web search. Iwicki, a senior finance major, said ChatGPT was helpful for clarifying concepts when he was confused by a professor’s explanation.
However, Sims Osborne, a professor of computer science at Elon, said getting explanations from AI comes with the risk of learning something incorrectly.
“People can be wrong too, of course, but I feel like people are more likely to blindly trust LLMs than other people,” Osborne said.
Carroll said she could never see herself using AI for school.
“I tried it once with an organic chemistry question, and it just got it completely wrong,” Carroll said.
Osborne said ChatGPT, and other large language models like it, are very good at sounding confident because they can produce relevant words, but those words aren’t necessarily used correctly.
“I use that for helping multiple choice questions,” Osborne said, “because, if you’ve ever tried to write a multiple-choice test, it’s really hard to come up with incorrect answers that sound good. ChatGPT is wonderful for that.”
However, Osborne isn’t fully against AI in the classroom. She said that, while the students in her introductory computer science courses can be easily misled by incorrect answers to broad prompts, students with a strong grounding in fundamental concepts can sometimes benefit from more specific questions.
“It’s the difference between if you’re asking a friend, ‘Hey, do you know how to do this one thing?’ where the answer would be a 30-second answer, versus ‘Hey, can you do my homework for me?’” Osborne said.
Elon’s Office of the Provost provides a generative AI statement on its website, which states that students are responsible for adhering to AI policies set by faculty. One of the prohibited behaviors in the “Cheating” section of the 2024-2025 Elon University student handbook is using technology not previously authorized by an instructor for a test or evaluation. In other words, while AI may be useful, students can still be penalized for using it inappropriately.
