My experience in one of my current classes makes me wonder about the point of academics in the face of generative artificial intelligence. On Mondays and Wednesdays from 11 a.m. to 12:15 p.m., I sit in a lecture hall and learn about environmental economics. I feel confident that everyone in that class would agree on a few things: that climate change is a real and pressing issue; that greenhouse gas emissions cause climate change and should be regulated and reduced; that water contamination and air pollution from data centers and other sources harm the environment; and that the people who are unfortunate enough to live near these data centers shouldn’t suffer more than others because of chance or income. And yet, despite the fact that AI exacerbates these problems, many people in the class use ChatGPT to complete our problem sets. Why?
Sometimes I think that people must not know how bad AI is, because if they did, then surely they wouldn’t use it anymore. But many people (in that class and in the world) know that the data centers that power AI worsen these problems, and they use it anyway.
Seeing this happen makes me more cynical. What is the point of learning about a problem if we are going to continue to actively contribute to it? What is the point of learning about pollution and doing problem sets about it if we are then going to turn around and pollute anyway? I know that AI is not the only part of our day-to-day lives that causes pollution: Our standard of living requires significant energy consumption, and most of our actions consume energy or resources to some extent. But generative AI is at a unique point: Our energy systems are not yet fundamentally intertwined with and reliant on it the way they are on oil and gas. We could stop using AI tomorrow and be okay. But if we don’t regulate it, we might become too dependent on it to roll it back later.
In many ways, the rise of AI mirrors earlier phases of industrial development in the United States and the world. The railroad and steel industries, like so many others under capitalism, boomed into monopolies that exploited their workers and the environment until their abuses grew so severe that the government had to step in and enforce regulations, usually after massive civilian efforts to secure reforms. How is AI any different? Amazon, Google, and Microsoft own over half of the world’s hyperscale data centers (those over 30,000 square feet). People living near data centers deal with unhealthy levels of noise pollution, unstable access to the electrical grid, and polluted water wells. They bear the brunt of everyone’s requests: No matter how important or unimportant your request is, it still amounts to at least one water bottle’s worth of water poured out, and it degrades the quality of life of those living nearby.
It feels as though when people are given access to a resource, no matter how badly it might harm others, many of them will use it without asking questions or considering the trade-offs. It feels like so many people are already ready to throw their hands up and say that AI-related pollution is out of their control, which would mean that it’s up to the government to impose regulations on it. I’ve heard so many people say, “AI isn’t going away, we just have to accept it for what it is,” or, “In a couple of years, we won’t be able to learn without AI.” Is that really true? If we resign ourselves to AI’s already widespread prevalence, does that mean we won’t even try to advocate for a less destructive future? I’m just not sold on the idea that we won’t be able to learn without AI. As Catherine Shutt ’26.5 argued in an op-ed published earlier this semester, AI cannot replace the critical work of your own brain synthesizing, searching, and working out problems as part of the learning process.
Additionally, there is a significant difference between intentional, planned uses of AI and everyday, unnecessary requests. I would entertain arguments that some forms of AI, while still resource-intensive, may be sustainable and beneficial to humanity when used only for appropriate purposes. For example, the developers of AlphaFold, an AI program that predicts protein structures and how they interact with other molecules, won the 2024 Nobel Prize in Chemistry; their work has dramatically sped up research and drug development. In a case like that, using AI may well be resource-efficient and even conserve materials. But I don’t know if day-to-day requests for simple tasks are making humans more efficient, and they’re certainly not conserving materials. Even if AI seems like it’s making your studying life easier, it’s important to remember who is really in control of these platforms. The tech industry is not concerned with our well-being. It is concerned with making a profit. And it is certainly doing that.
If people feel like they can’t go about life without using AI, how much energy will that consume? A tremendous amount. I’ve already encountered people who feel they cannot do anything without AI’s assistance. This feels eerily similar to the way smartphone and social media developers deliberately designed addictive products to generate profit and traffic.
In a perfect world, the government would impose regulations on AI that disallow resource-wasting requests and permit only programs that use fewer resources than the human-driven alternative. But that doesn’t seem likely to happen anytime soon. Academic and professional institutions can, however, play a real role in shaping how central AI becomes and in guiding its cultural development. And maybe it is up to each of us to decide how much we want to use AI and how much information we want to feed it to train it and make it better.
Stella Rothfeld ’26 is an English major from New York City.

