I once had a student who used AI to generate several of his assignments. The first assignment in my class is an ethnography paper in which students discuss the culture they most identify with and the language of that community. It’s an assignment meant to let my students write about the things that matter most to them, so it was a bit surprising to see this student use AI to generate his work. At first, I simply told him that he couldn’t use AI to write his entire essay or else I would have to fail his assignments. In the past, when I’ve met with students caught using AI unethically, they’ve stopped using it almost immediately.
However, what surprised me most was that this student continued to rely on it even after my warning. It got to the point where I was very close to filing an academic dishonesty report, but I was curious about why he kept using AI to complete his work despite knowing the consequences could include failing the course. When I asked him, he said that he wasn’t a “good writer” (something he had mentioned the first time we spoke) and that he had never written a paper longer than a page. He even went so far as to say that his high school “didn’t teach him anything.” As a result, he was very closed off: He rarely spoke in class, didn’t participate in group discussions and was deeply insecure about submitting anything he had written himself.
I had another student, this time in an asynchronous online class, who likewise used AI to generate multiple assignments. When I emailed her to ask why she relied on AI so much, she told me the following:
“I had many [people] give up on me in middle school. I didn’t even try in high school—it was a struggle for me. It was easier for me just to use AI … I know I’m not smart enough.”
As moving as her message is, I don’t think her experience is uncommon, especially for students from marginalized communities. (Both of the students I spoke with were Hispanic, first-generation students from small rural towns.)
I currently teach English at a community college in central California, and one of the reasons I love teaching here is because of how diverse my students are. They come from so many different backgrounds and experiences: Some are fresh out of high school, while others are returning to college after a short (or long) gap. Many of my students are also the first in their families to go to college.
For the rhetorical analysis unit, I show my students Sir Ken Robinson’s famous 2006 TED Talk “Do Schools Kill Creativity?” It’s still a great talk for understanding how a speaker can effectively use ethos, pathos and logos in oral communication, but more than that, Robinson’s message resonates deeply with students. When he jokes about how schools “educate them progressively from the waist up and then we focus on their heads and slightly to one side,” my students chuckle, not simply because it’s funny but because they know it’s true. Many students have shared with me over the years how they often felt “dumb” in middle school and high school when they didn’t know what to do for an assignment or an essay, and how that still affects their self-esteem in college.
Just as I encourage my students to use Wikipedia wisely, I encourage them to use AI ethically. Once my students have submitted an essay, I have them fill out a form indicating whether they used AI and, if so, for what purposes. Usually, about half the class will admit to using AI, while the other half say they didn’t. Those who do report using it mostly for brainstorming ideas, citation help, proofreading or suggestions, and rewording or paraphrasing.
In my experience so far, it’s actually been rare for students to use AI to generate entire drafts. If anything, they may use it to generate part of an essay, like an introduction or a couple of body paragraphs. However, as we continue to explore ways to implement AI in the classroom, we also need to remind our students of an oft-forgotten truth about learning: It’s messy, and it’s OK to make mistakes. When Robinson, speaking in 2006, says that education stigmatizes mistakes, my students in 2025 nod their heads and agree wholeheartedly.
When I speak with students who have been caught using AI, they usually do two things: They apologize, and they almost always say, “I’m just not a good writer.” I’ve always made it a point to let my students know that there’s no such thing as a bad or good writer, a statement that sometimes raises a few eyebrows. The real distinction, I tell them, is between experienced and inexperienced writers, because writing is a skill that can be developed. This is not a point I bring up only on the first day of class; it’s one I emphasize often. It’s now in my syllabus, in a section I title the “Effort Formula.” When I tell my students that effort and consistency, not talent, are what make someone an experienced writer, I want them to recognize it as the truth. I even have assignments centered on the themes of failure and creativity, and I encourage rewrites and revisions.
In my experience, the students who turn to AI are often the ones who didn’t receive the instruction they needed before college and who gained the least experience with the writing process. They are left feeling unprepared and insecure about their abilities, and that insecurity points to a larger problem in which mistakes are still stigmatized and completion, not competence, is often the norm.
There are many discussions right now about the exciting possibilities AI can offer, and chatbots can certainly be valuable tools that support learning and streamline the process of gathering and evaluating information. But as crucial as it is to discuss the substantial downsides of AI, such as its environmental costs, we also need to examine the long-standing factors that sometimes encourage students to use AI unethically in the first place. If we don’t have these conversations now, much of our dialogue about the possibilities of AI in the classroom will mean very little.
