What are the limits in healthcare?

AI has worked its way into many industries over the past decade, with no end in sight. Healthcare has been no exception, but it has been one of the spaces in which public reception to AI has been most hesitant.

Research by the US-based Pew Research Center found the public generally split on the issue, with 60% of those surveyed as part of a nationwide study saying they would be somewhat or very uncomfortable if their healthcare provider were to rely on the technology for tasks such as diagnosis and treatment recommendations.

The survey also found that only 38% of Americans believe the use of AI in healthcare would lead to better outcomes, whilst 33% thought it would make them worse; the rest were ambivalent or did not know.

Despite these concerns, the global healthcare industry has pushed ahead with implementing the technology, from patient medical records and hospital management to drug discovery and surgical robotics. In the field of medical devices alone, research by GlobalData estimates that the market is set to be worth $477.6bn by 2030.

If AI is to become ubiquitous and involved in some of the most important decisions in a person’s life, what is the appropriate set of moral or ethical rules to which it should adhere? What are the ethical upper limits of AI, and where does it become unethical to implement the technology?

To find out more, Medical Device Network sat down with David Leslie, director of ethics and responsible innovation research at the UK’s Alan Turing Institute, to understand what set of rules should govern AI in healthcare.

This interview has been edited for length and clarity.

David Leslie (DL): I started to really think more deeply about this during the Covid-19 pandemic, because the Turing Institute was working on a project that was supposed to be a rapid-response, data-scientific approach to asking and answering all kinds of medical and biomedical questions about the disease.

I got to write an article in the Harvard Data Science Review at the time called “Tackling Covid-19 through responsible AI innovation”, which went through some of the deeper issues around biases, the big-picture issues, and how these were manifesting in the pandemic. Another piece was called “Does AI stand for augmenting inequality in the Covid-19 era of healthcare?”

Off the back of that, I was asked by the Department of Health and Social Care (DHSC) to support a rapid review into health equity in AI-enabled medical devices.


