To leverage AI, chemists need to ask the right questions

Chat applications built on large language models (LLMs), such as ChatGPT, Claude, and Gemini, are already being used by chemists for tasks ranging from literature review and hypothesis generation to regulatory interpretation and toxicological analysis. These systems can summarize complex papers, suggest novel molecular combinations, and even simulate experimental outcomes. In short, they’re accelerating the pace of discovery. This kind of technology is no longer just a tool in the chemist’s toolbox; it’s becoming a research collaborator.

I’ve been exploring how LLM-based chats are reshaping the scientific discovery process in chemistry. In a recent study I presented at the American Chemical Society Fall 2025 meeting, my coauthors and I discussed the pitfalls of this transformation. As AI systems evolve from passive assistants to active cocreators, we must rethink how we interact with them, and how we ensure their outputs are trustworthy, ethical, and scientifically sound.

In the aforementioned study, chemistry experts described their interactions with LLMs as a mix of curiosity, trial and error, and cautious optimism. They appreciated the speed and convenience but remained wary of overconfidence and lack of nuance in AI-generated responses. One of the most surprising findings from our research was how much effort goes into simply asking the right question. Prompt engineering, the art of crafting inputs to get meaningful outputs from AI, is now a critical skill. Yet most chemists and experts in other domains aren’t trained in it. They rely on intuition, iteration, and a bit of luck.

As one chemist interviewed for the study put it, “I just ask whatever pops into my mind and then iterate.” Another noted, “You have to be really careful with how you ask questions . . . it definitely biases the response.” This trial-and-error approach can be frustrating and time-consuming, especially when dealing with complex chemical terminology or regulatory language.

The burden of effective prompting often falls entirely on the user. That’s a problem. We shouldn’t expect chemists to become AI whisperers just to get reliable answers. Instead, we need systems that support them through guided prompting, templates, and validation tools that reduce cognitive load and improve output quality.
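To make that idea concrete, here is a minimal Python sketch of what a guided prompt template might look like. The class, its fields, and the wording are hypothetical illustrations of the concept, not tooling from the study: the template captures the structure an expert would otherwise have to improvise, and it builds in a request for sourced, hedged answers.

```python
from dataclasses import dataclass

# A minimal sketch of a guided prompt template for chemists.
# All names and wording here are hypothetical illustrations.

@dataclass
class ChemistryPrompt:
    task: str          # what the chemist wants, in plain language
    compound: str      # the substance or system under discussion
    context: str = ""  # optional regulatory or experimental context

    def render(self) -> str:
        # Wrap the user's question in structure that nudges the model
        # toward sourced, appropriately hedged answers.
        parts = [
            "You are assisting a chemist. Answer the task below.",
            f"Task: {self.task}",
            f"Compound or system: {self.compound}",
        ]
        if self.context:
            parts.append(f"Context: {self.context}")
        parts.append(
            "Cite sources for factual claims, state uncertainty explicitly, "
            "and answer 'unknown' rather than guessing."
        )
        return "\n".join(parts)


prompt = ChemistryPrompt(
    task="Summarize known toxicity data",
    compound="bisphenol A",
    context="EU REACH regulatory review",
)
print(prompt.render())
```

The point is that the template, not the user, carries the burden of remembering to ask for citations and calibrated uncertainty.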

Trust in AI is growing, but it’s still conditional. Chemists in our study wanted more than just answers from the AI chat applications; they wanted sources, references, and context. “I feel better using it when it gives references,” one chemist said. Another emphasized the importance of traceability: “I expect something high level, but with the ability to ask iterative questions and dig deeper.”

This desire for transparency is especially important in chemistry, where decisions can have real-world consequences for health, safety, and the environment. Overconfident or incorrect outputs aren’t just annoying—they can be dangerous. That’s why ethical safeguards must be built into AI systems from the ground up.

I have been working on projects that aim to make AI more accessible and trustworthy for chemists. That means designing interfaces that support natural workflows rather than disrupting them. It means offering prompt libraries that capture expert knowledge and make it shareable. And it means creating “prompt helpers” that guide users in framing effective questions without requiring them to master the intricacies of AI behavior. One study participant summed it up perfectly: “I wouldn’t want the barrier of entry to be too high. I just want to ask my question.” That’s the future we should aim for: AI systems that adapt to scientists, not the other way around.
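As one illustration of what a shareable prompt library and a simple prompt helper could look like, here is a short Python sketch. The template names, wording, and helper functions are my own assumptions for the sake of example, not interfaces from the projects described above.

```python
import re

# Hypothetical sketch of a shareable prompt library: named templates
# with {placeholders} that capture expert phrasing, plus a helper that
# tells users which fields a template needs before it can be used.

PROMPT_LIBRARY = {
    "safety_summary": (
        "Summarize the known hazards of {compound}, citing primary "
        "literature where possible and flagging any data gaps."
    ),
    "regulatory_check": (
        "Does {compound} appear on {regulation}? Quote the relevant "
        "listing and name its source."
    ),
}

def required_fields(template_name: str) -> set:
    """Return the placeholders a template expects, so a prompt-helper
    interface could ask the user for exactly those values."""
    return set(re.findall(r"{(\w+)}", PROMPT_LIBRARY[template_name]))

def build_prompt(template_name: str, **values: str) -> str:
    # Validate up front so the user fixes a missing field, not the model.
    missing = required_fields(template_name) - values.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return PROMPT_LIBRARY[template_name].format(**values)

print(build_prompt("regulatory_check",
                   compound="perfluorooctanoic acid",
                   regulation="the Stockholm Convention annexes"))
```

A chemist using such a library would pick a template and fill in the blanks; the expert phrasing travels with the template rather than living in any one person’s head.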

The integration of AI into chemistry research is not just a technical challenge; it’s a human one. It requires collaboration among chemists, computer scientists, ethicists, and designers. It requires listening to users, understanding their needs, and building systems that respect their expertise. As a human-centered computing researcher, I have been working with chemistry experts from different areas to explore how AI technologies such as LLM chats can benefit their work practices and research.

As we move toward a future where AI is a true research partner, we must ensure that the partnership is built on trust, transparency, and shared responsibility. The goal isn’t to replace scientists; it’s to empower them. And that starts with designing AI that supports, rather than complicates, the scientific process.



Juliana Jansen Ferreira is a computer scientist at IBM Research Brazil who is focused on human-centered research and is currently developing technology for the chemistry domain.

Views expressed are those of the author and not necessarily those of C&EN or ACS.

Do you have a story you want to share with the chemistry community? Send your idea or an 800-word submission to cenopinion@acs.org.


