A set of developments is unfolding that could have a major impact on the trajectory of artificial intelligence (AI) development in patient care organizations, and the identity of the organizations involved is particularly intriguing.
As we reported this week, on September 17, the Oakbrook Terrace, Ill.-based Joint Commission and the Coalition for Health AI (CHAI) announced that they had collaborated to produce a new set of guidelines on AI development. The organizations posted a press release to the Joint Commission's website with the announcement, emphasizing that the release of these guidelines is the first of what the organizations anticipate will be numerous collaborative advances between the two. The aim of this ongoing collaboration, the press release stated, is “to advance the responsible development, deployment, and oversight of AI in healthcare by fostering collaboration across the health sector, including industry, government, academia and patient communities.”
The press release quoted Jonathan Perlin, M.D., the Joint Commission’s president and CEO, who said that “We understand how quickly AI is changing healthcare – and at a scale I’ve never seen in my time as a leader. From the moment we announced our partnership with CHAI, we knew we wanted our partnership to reflect that fast-paced dynamic, while still delivering a thoughtful and streamlined guidance for healthcare organizations to self-govern with AI.”
The press release went on to note that the guidance, “which features high-level recommendations for the Responsible Use of AI,” is “designed to be accessible, applicable, and adaptable for healthcare organizations at any stage of their AI journey.” Specifically, the release stated, it establishes expectations for “policies, appropriate local validation, monitoring, and use, to be flexibly interpreted and integrated into existing or new processes as deemed appropriate for the context of any organization.” The release added that “This guidance is meant to provide transparency into the Joint Commission-CHAI process, and community feedback on this guidance will be incorporated into future outputs.”
And it quoted Brian Anderson, M.D., CEO of CHAI, as stating that “The need is immediate, and we are eager to respond. This guidance and all subsequent playbooks are about keeping pace with the evolving field, not just by defining responsible AI, but by making it usable in hospitals and health systems across the country—no matter their resource level.”
The entire document, entitled “The Responsible Use of AI in Healthcare,” notes, among other things, that “The transformative opportunity that AI presents is not without risk… One of the primary concerns is the potential for AI errors, which could arise from algorithmic biases, data inaccuracies, or unforeseen interactions within the healthcare environment. These errors can lead to misdiagnoses, inappropriate treatment plans, and ultimately, patient harm. Additionally, the lack of transparency in AI decisionmaking processes, often referred to as the ‘black box’ problem, poses significant challenges in understanding and trust.” The document also cites risks to data privacy and security, and the need to balance the vast amounts of data required to develop AI against the imperative to protect patients and patient care processes.
So why is all of this significant? It’s clear that the leaders of the Joint Commission, whose bedrock role remains the monitoring of patient safety in hospitals and other patient care organizations, will continue to press forward in that role. Indeed, the stated mission of the Joint Commission remains “to continuously improve health care for the public, in collaboration with other stakeholders, by evaluating health care organizations and inspiring them to excel in providing safe and effective care of the highest quality and value.” At the same time, clearly, the Joint Commission’s leaders are seeing for themselves how complex and multi-layered the idea of patient safety is becoming, as the U.S. healthcare system moves toward fundamental transformation in what is becoming the age of artificial intelligence.
To be clear, beyond the hype, artificial intelligence simply means advanced data analytics, and the concept of data analytics has been around for a very long time indeed in U.S. healthcare. But what is also true is that artificial intelligence has the potential to qualitatively transform both patient care organization operations and aspects of clinical care itself, as it reorders what people do and how they do it.
We’re already seeing that play out in even the earliest steps forward, particularly in note and documentation preparation for clinicians. Physicians and nurses are increasingly using generative AI tools to streamline what has been a somewhat burdensome set of tasks—responding to patient inquiries and documenting aspects of the patient record—in ways that not only save the clinicians time, but also augment their cognitive efficiency. And that matters tremendously at a time when clinicians are more time-burdened and time-stressed than ever.
Meanwhile, both algorithmic and generative AI tools are going to transform diagnostics in certain key areas; they are already doing so for radiologists. There are many pitfalls involved in the use of AI tools for diagnostics in particular; but our reporting here at Healthcare Innovation is confirming that patient care organization leaders, including clinician and health IT leaders, are moving forward very thoughtfully as they advance in the diagnostics area.
And of course, ambient intelligence in the patient visit area is already becoming table stakes for patient care organizations trying to help their physicians cut down on their documentation time—a key area of dissatisfaction for so many.
What will be really fascinating will be the path forward in agentic AI, which right now remains rather cloudy and uncertain. But the technology is beginning to advance even in that groundbreaking area, and ultimately, as we’ve seen over and over in the past, when technological tools are made available, they will be used and advanced. The announcement from the Joint Commission and CHAI notably referenced “the potential for AI errors, which could arise from algorithmic biases, data inaccuracies, or unforeseen interactions within the healthcare environment”—with the potential for “misdiagnoses, inappropriate treatment plans, and ultimately, patient harm,” as a result.
And that seems clearly to be a “sweet spot” for the Joint Commission’s leaders, as preventing “patient harm” remains an appropriately absolute priority for the organization.
It seems clear that the Joint Commission and CHAI leaders are being very thoughtful indeed as they work together to navigate this uncharted territory. There really has been nothing like artificial intelligence in patient care delivery or in hospital and healthcare operations until now. And it makes sense that these two organizations would want to team up to help develop practical guidance that speaks to the real needs, on the ground, of leaders nationwide. One can only wish these leaders the very best as they work to chart a course through this new world that we all find ourselves in. And it will be fascinating, even a year from now, to see how all of this evolves. This is indeed a new chapter for our healthcare system, and one that can use all the collaborative help it can get.

