A University Task Force is working with students and faculty to offer more guidance about the new, more secure AI tools.
MINNEAPOLIS — The University of Minnesota is taking a big step forward in its embrace of artificial intelligence, now offering free access to Google Gemini and its full array of generative AI tools to all students and staff who opt in with their university email address.
“It’s going to touch the way everyone at the university is working,” said CJ Loosbrock, the UMN’s Chief Technology Officer.
Loosbrock said the new partnership, which is offered as part of the University’s existing contract with Google, is an acknowledgement of how present the emerging technology already is.
Earlier this year, the University learned that at least 13,000 umn.edu accounts were using a free version of ChatGPT, raising questions about data security. Loosbrock said that’s a concern that Gemini is helping to solve.
“(Gemini) uses enterprise-grade data protection,” she said. “The prompts, and any of the back-and-forth responses or questions that are asked in the tool, are kept private.”
Security is just one of the topics that a new University of Minnesota Artificial Intelligence Task Force has spent months working to address amid the rollout.
“We want to provide an environment that is as safe for students to explore as possible, with the surrounding supports for those situations where there may be questions about academic integrity or scholastic misconduct,” said Caroline Hilk, Director of the UMN’s Center for Educational Innovation. “We’re in the infancy of considering the use of AI in educational applications. We are now at a point where we are providing the resources and training.”
The task force has already posted guidelines for appropriate use, and it will be hosting workshops for faculty in addition to sending recommendations to the University President.
“I think it’s a good direction,” said John Cracraft. “The university needs to be preparing kids to enter a workforce where AI is like, the focal point of your entire job.”
Cracraft says he has used a different generative AI program for a while now to help with everyday questions and for education support, but he says he’s concerned about the impact that he’s seen it have in school.
“It’s kind of just a feedback loop of AI,” he said. “One person has a question, so they put it into AI and then someone else expands on it with AI and it just feels like no one’s thinking for themselves anymore.”
Cracraft wrote about his concern in a new op-ed, “We’re losing our love of learning and AI is to blame,” published in the Minnesota Daily.
In the piece, he draws on a recent MIT study, which monitored the brain activity of people who used ChatGPT to write essays, comparing them to those who used search tools and to those who used their brains only. After four months, those using AI “consistently underperformed at neural, linguistic, and behavioral levels” compared to those using their brains only.
Kent Erdahl: “Is that something that this task force looks at and considers and is there a bigger question here in terms of not IF we can do it, but is it good to do it?”
Caroline Hilk – Director, Center for Educational Innovation: “The presidential task force has not taken up this issue specifically, but the issue of cognitive offloading as it’s been used in the literature, is a significant challenge. Your question alluded to if we can or if we should, and the question that I am asking and that we are asking across the university is under what conditions. And so there is a place for AI, but it’s thinking very seriously about when and where it should be used to maximize the learning potential.”
Cracraft says he hopes that means the university will work with everyone to find the best ways to safeguard and foster critical thinking at a time when it seems to be at risk.
“I just hope they take this seriously and it doesn’t end up screwing my generation,” he said.