The recent surge in AI usage has prompted professors throughout the University to rethink what AI looks like in their classrooms and in their respective fields. Four University professors from different departments shared their revised AI policies, giving The Sun their thoughts on how AI is affecting education and what to potentially do about it.
Prof. Jessica Ratcliff, Science & Technology Studies, Cautions Against AI in Education
Prof. Jessica Ratcliff, science & technology studies, has taken an interest in the role of AI, approaching it from what she described as a “worried and critical perspective.” Her concerns about AI led her to create a group called “A-Why?” alongside Prof. Adam Smith, anthropology.
A-Why? hosted its first roundtable discussion, “Using AI in Humanities Research,” on Oct. 17. Five humanities professors served on the panel: Ratcliff; Smith; Prof. Jan Burzlaff, Jewish studies; Prof. Laurent Dubreuil, comparative literature; and Prof. Sturt Manning, classics.
The event in Morrill Hall drew 20 participants, with each professor explaining how AI has affected their research, teaching and field in general.
Ratcliff explained that the purpose of A-Why? is to address concerns about AI as a collective body, giving people a space to communicate and learn from each other’s experiences.
“I think that we hopefully would be able to come up with some useful studies, guides, experiments [and] policy suggestions that can bring some of the issues that we’re worried about to the table,” Ratcliff said. She also hopes to “publicize” these issues and “change the way discourse about AI is going on on campus.”
Ratcliff, who was on sabbatical leave last year, is in the process of adapting her class policies and structure to account for generative AI like ChatGPT.
For research papers, Ratcliff said students may not use AI to write the bulk of an essay, and if AI is used in any way, the student must cite exactly how it was used. Students who rely too heavily on AI face a grade penalty as undergraduates, with more serious consequences for graduate students, since “they’re training to be professional academics.”
To address students using AI outside the boundaries of her policy, Ratcliff said, “I am going to probably change the assignments that I did in the past for my bigger lecture classes, like less papers, more in-class exams.” Her reasoning was that students who use generative AI are not doing the work themselves and thus are not learning, “and, therefore, what’s the point?”
For her history classes, she said she would return to having students memorize facts, names and dates, even though memorization “has been very out of fashion in history teaching for a long time.” She explained that AI is not always factually reliable, so students who have historical facts memorized will have that information to draw upon.
While Ratcliff expressed her concerns with AI, she said it has the potential to be useful. She plans to experiment with AI in her class next year, allowing students to use ChatGPT and Copilot as tools for research on archival material.
She concluded by saying that “technological change only produces social progress through struggle, resistance and regulation, and we really need that in the present moment at the University level and beyond.”
Prof. Jan Burzlaff, Jewish Studies, Optimistic About the Potential of AI
Prof. Jan Burzlaff, Jewish studies and a Sun Opinion Columnist, takes a different angle than the other professors The Sun spoke to: he is optimistic about the potential AI holds for education if used correctly.
“I think AI can deepen education if we approach it critically and collectively — as a sparring partner and a smart but flawed collaborator rather than a replacement for thought,” Burzlaff wrote to The Sun.
As a learning partner, he believes AI can teach students faster and more effectively, since it makes them more aware of the way they are thinking as they “see what comes easily to a machine and what still requires human nuance.”
Burzlaff is currently teaching “GERST 2567: The Holocaust in History and Memory” for the fall semester, while in the spring he will be teaching “JWST 3825: The Past and Future of Holocaust Survivor Testimonies.” Both courses examine how people learn, remember and represent traumatic pasts, as well as how digital technology is reshaping that process.
In his fall course, students are not allowed to use AI for their writing because the class aims to practice the skills of slow historical interpretation and ethical listening. Burzlaff’s spring course, however, will make AI a central part of the major assignment.
“Students will analyze Holocaust survivor testimonies using AI tools (ChatGPT, Gemini, Claude, etc.) and then critique what the machine gets wrong,” Burzlaff wrote. “The aim there is to teach discernment — not prohibition.”
Burzlaff further explained that he designed those policies so students learn to recognize the difference between “writing as a deeply human act that requires patience and doubt, and writing alongside a machine that mirrors our reasoning but lacks empathy and moral intuition.”
When asked how the University should create policies regulating AI, Burzlaff warned against a “one-size-fits-all policy,” since AI affects different fields in different ways. He also emphasized the need for a more nuanced approach than banning AI outright.
“AI won’t replace the university, but it will test whether we still believe in what a university is for: slow thought, uncertainty, and the shared work of meaning-making,” Burzlaff wrote. “The challenge isn’t to ban the machine — it’s to stay more human than it is.”
Prof. Hadas Ritz, Mechanical and Aerospace Engineering, Stresses Academic Honesty with AI Use
Prof. Hadas Ritz, mechanical and aerospace engineering, said her overall course policy has always centered on academic honesty, even before generative AI became widespread.
Like many other professors, Ritz assesses student understanding mostly through in-person exams. As such, she encourages students to complete homework using any sources they want.
“The reason that that’s my policy in the classes that I’ve been teaching lately is because the homework is strictly an opportunity for students to learn the material. So if they’re not putting in the effort, they’re only cheating their own understanding, and that is going to show on the exams,” Ritz said.
Ritz also said students must note on their homework whom they worked with, what sources they consulted and whether they looked up answers on Chegg or had ChatGPT solve an entire problem.
While Ritz has seen some students use AI to improve their understanding and explore a topic, she said many others use it to bypass their own learning opportunities.
Within engineering, Ritz said the landscape is shifting rapidly in terms of what is possible, what will soon be possible and what AI can be trusted to get right. She said AI is an easy tool to misuse, which makes her cautious and makes her want others to be cautious as well.
“If you’re using a calculator and suddenly it’s telling you that, you know, one plus two equals four, you’re going to say ‘that’s not right,’” Ritz said. “[AI] is a much more advanced tool, [so] people need to build up the capacity to understand if something is right or wrong — to understand if they should trust what they’re being told or not.”
Prof. Daniel Susser, Information Science, Expresses Skepticism
Prof. Daniel Susser, information science, is “skeptical and quite cautious” about any new technology, AI included.
In his course “INFO 1200: Information Ethics, Law, and Policy” and various other research courses, Susser said it is “impossible to police AI use,” so his approach focuses less on regulating students through AI policies and more on structuring his classes to avoid AI. Like Ratcliff, he plans to administer more in-class exams in place of take-home assignments, which could be completed with generative AI, to reduce the chance that AI use gives some students an advantage.
According to Susser, the University’s administration recommended different models to faculty for handling AI in their classes: removing it entirely, incorporating it significantly or adopting a hybrid of the two. The guidance helped Susser brainstorm how to structure his classes.
“In my classes, I go as much as possible for ‘don’t use AI.’ I don’t think it will be helpful,” Susser said.
As a professor, Susser said it is hard to make students feel that spending time doing the work for a class will develop skills that benefit them in the long run. He understands the allure of using ChatGPT instead of doing an assignment, but believes students lose out in the long term when they do.
“I also personally have a hard time prioritizing my long-term interests over my short-term interests, so I really get it. I really don’t blame students for doing it,” Susser said. “But I do think that over time, people are going to have to figure out that it’s actually better for them not to use AI in ways that replace practicing hard skills, because that’s the only way you build them.”
For the fall 2025 semester, the University published its Generative AI Services page, which includes AI resources and guides for both educators and students. The University supports multiple AI tools, such as Microsoft 365 Copilot Chat, Copilot in Windows, Adobe Firefly and Zoom AI Companion, but it is still reviewing other AI-powered tools like GitHub Copilot, Microsoft 365 Copilot and Google Gemini.
The University continues to explore next steps for AI in higher education, encouraging community engagement by promoting undergraduate initiatives, convening advisory councils and researching AI through its AI Task Force.
The AI Task Force’s latest report, published in January 2024, explored AI’s risks, a controls framework for administrative use of AI and opportunities for AI on campus.