AI reshapes the conversation about academic integrity — The Panther Newspaper

With generative artificial intelligence (Gen AI) rapidly evolving, universities across the nation are working to find ways to manage its use in the classroom. Educators now fear that what was once a novel study tool is bypassing traditional learning and sparking a crisis of academic dishonesty.

Since the pandemic, Chapman’s Academic Integrity Committee (AIC) has seen a 400% increase in cases, according to committee chair and biology professor Lindsay Waldrop.

“There are a lot of cases that involve Gen AI. Some of these are straightforward cases of plagiarism: some form of copying and pasting text directly from the output of some large-language model such as ChatGPT,” Waldrop told The Panther. “Some are not nearly as straightforward.”

The AIC looks into reports made by instructors about students’ alleged academic misconduct and takes the next appropriate actions, according to the university’s policies.

Following the release of ChatGPT in 2022, the 2023-2024 school year saw a total of 158 cases, a record for the university, Waldrop said. While not every case was related to Gen AI or ended in a violation, the committee has seen the trend continue.

Last year, Waldrop said, the committee saw a total of 128 cases reported by instructors.

“From the standpoint of the AIC, our committee would like to see a decrease in the number of cases we’re seeing due to Gen AI,” said Waldrop. “There is a lot of confusion out there right now for both instructors and students, which is leading to an increase in academic conflict.”

Currently, policies for Gen AI use are left up to the instructor; however, Chapman provides faculty with sample approaches they can include in their syllabus, including a position statement on writing.

This academic year, Interim Provost Michael Ibba is focusing on a more robust analysis of Gen AI, establishing a task force to investigate a university-wide framework that would provide more clarity on Chapman’s stance on the issue.

“While strong work is underway across campus, we need to bring these efforts together as a coordinated Chapman initiative that maximizes our expertise and vision,” Ibba said in a recent email sent to staff.

Without a unified policy, not all faculty members are on the same page about the use of Gen AI. Some professors have banned it entirely and found ways to deter its use by altering essay prompts, allowing only handwritten notes or permitting only flat devices, such as iPads, in class.

“I do not allow AI in my introduction courses and have not experienced any major violations in a few semesters now,” said Lewis Luartz, political science professor. 

Other professors have embraced its use by setting parameters for when it is appropriate in their specific classes.

Philosophy professor Emily Adlam has found a way to incorporate ChatGPT into her Philosophy of Mind course. As part of a larger discussion about AI consciousness, Adlam created an activity where students try to get around the restrictions of AI. 

“It’s a fun way to think about some of these questions in a more practical way,” she said. “I encourage students to see it as a tool, but not allow it to become a crutch.”

Along the same lines, some professors allow students to use Gen AI in limited ways and have even taken student feedback into consideration.

Since ChatGPT’s release, English professor Samantha Dressel has been asking her students during midterm evaluations how integrated they would like AI to be in the course. This year, Dressel asked on the first day and found varying responses across her classes.

“In general, I think there are many reasons not to want to use AI, so as long as I have anyone in a class who objects, I’m still not going to require AI use for anything,” she said.

Dressel allows students to use Gen AI for brainstorming on assignments, but not for writing full drafts. She also asks students to write a paragraph analyzing their use of Gen AI at the end of each assignment, emphasizing critical use of the technology.

“I feel like there is so much emphasis put on student uptake of AI, admittedly for good reason, that people seem to overlook how many students and others really don’t feel comfortable integrating the technology in some or all areas of their lives,” Dressel said.

While professors are seeing the impact of Gen AI, students are also questioning the value of their honest work.

“It’s confusing when each class is different,” said senior business administration major Mitchell Mohony. “I get that we are all busy and stressed, but it feels a little bad when I do the work for an assignment and someone else can plug it into ChatGPT and get the same result in seconds.”

While the university is working on a solution, professors and students will continue to navigate the age of Gen AI together.


