Five best practices to minimize hallucinations
Among the practitioners sharing AI concerns in the Thomson Reuters Institute’s 2025 Future of Professionals Report, 50% cite demonstrable accuracy as one of the biggest barriers to investment in AI-powered technologies.
Hallucinations, of course, are a key reason for worries concerning AI accuracy. To address them, technology developers are deploying several strategies to significantly reduce the risk of inaccuracies. At the same time, professionals are looking for ways to get the most accurate responses from AI.
These best practices, though distinct, overlap to some degree. The characteristics they share include human oversight, data quality, and the focused, strategic use of AI tools.
1. Human-in-the-loop verification
When professionals use AI tools strategically and attentively, the technology can deliver significant productivity benefits, including automating routine (though necessary) tasks, streamlining workflows, and reducing expenses. However, it is still critical for professionals to review the quality of AI-generated outputs, which helps catch hallucinations before they are used in projects or cases.
Well-designed AI systems can rapidly gather, access, and sort through vast amounts of data. In addition, the large language models (LLMs) behind AI tools can continuously “learn” to improve the answers and outputs they provide in response to professionals’ requests.
The idea behind human-in-the-loop (HITL) is that AI should assist human beings, not replace them. In other words, AI tools for professional purposes need to be human centered. Accordingly, HITL interweaves human insight into the LLM’s learning process. It requires developers to ground the training data that guides the AI model’s machine learning in real-world sources. Developers then use data analytics to review outputs for inaccuracies.
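As a rough sketch of the review step (hypothetical names and functions, not any product’s API), the Python below routes AI-generated drafts through a human approval gate so that nothing unreviewed reaches a project or case file.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated output awaiting human review."""
    text: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def human_review(draft: Draft, reviewer_ok: bool, notes: str = "") -> Draft:
    """Record the human reviewer's verdict; nothing ships without approval."""
    draft.approved = reviewer_ok
    if notes:
        draft.reviewer_notes.append(notes)
    return draft

def release(draft: Draft) -> str:
    """Only approved drafts are released into a project or case file."""
    if not draft.approved:
        raise ValueError("Draft has not passed human review")
    return draft.text

# Usage: the AI produces a draft, and a person signs off (or not) before release.
draft = Draft(text="Summary of the cited authorities ...")
draft = human_review(draft, reviewer_ok=False, notes="Citation 2 could not be verified")
# release(draft) would raise here, keeping the unverified draft out of the case file.
```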
2. Prompt engineering
This technique is intended to reduce hallucinations through structured prompts, the instructions that set boundaries for a user’s query. The goal is to keep the AI platform from making things up.
A common example is for professionals to add the following instruction to a prompt: “If the information is not in the provided text, say you don’t know.” This kind of prompt engineering helps force the AI model to generate outputs from the text it has actually been given rather than inventing an answer.
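A minimal sketch of that structure, in Python, is below. The helper name and example text are hypothetical; the prompt it builds would be sent to whatever model or platform is actually in use.

```python
def build_grounded_prompt(source_text: str, question: str) -> str:
    """Wrap the user's question in instructions that keep the model inside the provided text."""
    return (
        "Answer the question using only the provided text.\n"
        "If the information is not in the provided text, say you don't know.\n\n"
        f"Provided text:\n{source_text}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    source_text="The lease term is 24 months beginning January 1, 2024.",
    question="What penalty applies to early termination?",
)
# The penalty is not in the provided text, so a model following the
# instruction should respond that it doesn't know rather than guessing.
print(prompt)
```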
3. Retrieval-augmented generation
Put simply, retrieval-augmented generation (RAG) is the practice of equipping the LLM to retrieve information from documents that specifically address a user’s query. For legal users, a RAG-based AI system gathers and stores contracts, cases, and other legal documents and draws its outputs from them rather than from the LLM’s own memory. Although RAG systems take more work to set up, this structured environment can reduce the chances of hallucinations.
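A stripped-down sketch of the retrieval step appears below. A production system would use embeddings and a vector index; here a simple word-overlap score stands in for that search, and the documents and helper names are illustrative only.

```python
def score(query: str, document: str) -> int:
    """Toy relevance score: count of query words that appear in the document.
    A production RAG system would use embeddings and a vector index instead."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt from retrieved documents so the model answers from them,
    not from its own memory. The result would then be sent to the chosen model."""
    context = "\n\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The master services agreement includes a 30-day termination-for-convenience clause.",
    "The employee handbook describes the firm's remote work policy.",
]
print(build_rag_prompt("What is the termination notice period?", docs))
```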
RAG does have its limitations. Like AI in general, it’s stronger on highly specific queries than on abstract concepts and nuances. That’s another reason that human beings need to oversee AI development and results. Humans are more sensitive to nuance than LLMs, and they need to keep those nuances in mind when reviewing AI outputs for accuracy.
4. Grounding AI in authoritative data
This approach is similar to RAG in that it taps information sources that the AI developers have identified and verified as reliable and up to date. AI tools for legal practitioners, for instance, access case law datasets that the developer maintains under the guidance of legal professionals it employs.
In addition, a reliable professional-grade AI platform should provide the sources and citations used to generate its outputs. This allows the human user to conduct fact-checking that verifies the information the platform delivers.
The attorneys in the opening example would have saved themselves a great deal of trouble had they reviewed the sources of their brief.
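As a rough illustration of that fact-checking step, the sketch below checks whether each citation an AI answer lists actually appears in an approved, developer-maintained source set; anything unmatched is flagged for human review. The source names and the verify_citations helper are illustrative, not any platform’s API.

```python
# Approved sources the platform claims to have drawn on (illustrative data only).
APPROVED_SOURCES = {
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Restatement (Second) of Contracts § 90",
}

def verify_citations(cited: list[str]) -> dict[str, list[str]]:
    """Split an answer's citations into verified and unverified buckets."""
    verified = [c for c in cited if c in APPROVED_SOURCES]
    unverified = [c for c in cited if c not in APPROVED_SOURCES]
    return {"verified": verified, "needs_human_check": unverified}

report = verify_citations([
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Doe v. Roe, 999 U.S. 1 (2030)",  # not in the approved set: flag it
])
print(report["needs_human_check"])  # ['Doe v. Roe, 999 U.S. 1 (2030)']
```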
5. Guardrails to prevent unsupported outputs
The term guardrails refers to mechanisms that AI developers put in place to ensure AI systems operate within defined boundaries. This helps prevent the system from generating results that are biased or inaccurate, or that simply don’t address the user’s query.
Professionals can utilize guardrails by implementing validation procedures, establishing clear operational boundaries, using approved AI tools, and monitoring AI performance to detect potential deviations from expected behavior.
By establishing these safeguards, professionals can significantly reduce the risk of unsupported AI outputs.
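The sketch below (hypothetical checks, not any specific product’s guardrails) shows one such validation procedure: an output is released only if it stays within an approved topic boundary and carries at least one citation, and any deviation is logged for monitoring.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrails")

# Operational boundary: topics this deployment is approved to answer (illustrative only).
APPROVED_TOPICS = {"contract law", "employment law"}

def passes_guardrails(topic: str, citations: list[str]) -> bool:
    """Validate an output before it reaches the user; log any deviation for monitoring."""
    if topic not in APPROVED_TOPICS:
        log.warning("Out-of-scope topic rejected: %s", topic)
        return False
    if not citations:
        log.warning("Unsupported output rejected: no citations supplied")
        return False
    return True

# Usage: an answer with no supporting citations is blocked, and the deviation is logged.
print(passes_guardrails("contract law", citations=[]))  # False
```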
How to tackle the accuracy challenge
One key best practice that professionals themselves can follow to minimize hallucinations? Implementing AI tools whose developers share those professionals’ commitment to accuracy and responsible use.
Again, hallucinations are an AI-accuracy problem. Solving it requires responsibility from both users and providers. Professionals need to maintain oversight, and providers must follow best practices to build trustworthy AI.
Our professional-grade AI technology addresses the accuracy challenge with CoCounsel, the industry-leading AI assistant for legal, tax, compliance, government, and other professionals.
- CoCounsel incorporates agentic AI, a newer form of AI distinct from generative AI. For professional users, agentic AI-powered technology can autonomously plan, reason, and execute multistep processes that follow predefined objectives, which can accelerate complex workflows while maintaining quality and accuracy.
- Another CoCounsel agentic AI feature is Deep Research. This research capability breaks down complex tasks, sources its citations, and generates structured reports, all under human oversight and control. For legal professionals, CoCounsel integrates with Westlaw research platforms and Practical Law guidance tools to deliver accurate information and high-quality data regarding laws, cases, and legal resources.
Our commitment to responsible AI
Reducing AI hallucinations requires human vigilance. That said, the power of AI is reduced if professionals believe they need to spend a great deal of their valuable time doubting its outputs. CoCounsel makes it easier for users to trust AI and to reap its benefits — automating research, verifying sources, and delivering answers firmly rooted in trusted content.
In March 2025, Thomson Reuters further demonstrated its commitment to AI accuracy and ethical use when it achieved ISO/IEC 42001 certification. ISO/IEC 42001 is an international standard for entities providing or utilizing AI-based products or services, and it specifies requirements for the responsible development and implementation of AI management systems. We also became the first in our market to achieve FedRAMP “In Process” status, which speaks more to security than accuracy but underscores our commitment to best-in-class development.
Our approach to AI centers on delivering professional-grade AI tailored for complex industry challenges. Our strategy is built on four foundational pillars — high-quality data, domain and technical expertise, stringent security measures, and ethical considerations.
That’s why professionals who begin using CoCounsel today will be ready to securely apply AI and focus on the high-value work that better serves them and their clients.
