Since ChatGPT first made its splashy entrance into the market in November 2022, worries over the use of generative artificial intelligence and large language models have continued to surface. In recent semesters, we have seen faculty at Tufts begin to swap take-home papers for in-class exams in order to fairly test student understanding of course content and prevent cheating. In the Student Accessibility and Academic Resources Center, writing support staff have repeatedly been given new guidelines on how to handle the use of generative AI in academic writing. Even within the Daily, we have received submissions suspected of being generated or sourced with AI.
As editors of the Daily, we are concerned about the impact of AI on student journalism — particularly news reporting. Our work depends on the dual objectives of disseminating information in a timely manner and producing quality writing. Walking the fine line between originality and timeliness is already a challenge, which makes news especially susceptible to replacement by generative AI and large language models. A key concern is that these platforms undermine the factual accuracy of writing, arguably the single most important objective of news reporting. Generative AI can hallucinate false evidence and blur factual accuracy; because large language models are trained on vast sets of text that are neither vetted for validity nor up to date, they fail to provide the context needed to understand key quotes or interview statements. Further, generative AI strips writing of intentionality and can carry the biases of its training data rather than those of the author.
In the Opinion, Features, Arts, Science and Sports sections, individual voices and styles are prioritized and championed for their creativity and originality. The use of generative AI in these sections risks homogenizing writing style and, in turn, negating the purpose of journalistic writing altogether. Even in News, where individual voice is more limited and a stricter style is followed, generative AI still pollutes the section's core purpose: providing reliable information through original reporting.
While it would be foolish to deem all AI platforms harmful to learning, and while we remain open-minded about potential ways to legitimately and ethically integrate large language models into journalism, direct content generation should absolutely not be tolerated in campus journalism or at the Daily. There are unique ethical concerns to automating the writing process in journalism: it replaces a field whose essence and existence are defined by writing. And because student journalism is driven by shared passion, with participation and contribution often self-paced, there is even less reason to resort to AI. Why write an article at all if the entire writing and critical thinking process is removed?
As AI usage and anti-AI measures continue to rise across the Tufts campus, we hope the Daily never reaches a point where it must fundamentally change its writing guidelines to prevent generative AI use. After all, replacing take-home essays with in-class exams could discourage students who genuinely wish to hone their writing and thinking skills. If the Daily were to make radical changes to its writing and editing process to prevent the use of AI, it could undermine the very work that drew students to campus journalism in the first place.
We hope the Daily can remain a place where writers are encouraged to develop genuine ideas. But this requires all of us to collectively understand the harm that generative AI poses to our writing and our work, especially when the aim is to demonstrate creativity and authenticity.
