AI could make it harder to establish blame for medical failings, experts say

Experts warn that the use of artificial intelligence in healthcare could create a legally complex blame game when it comes to establishing liability for medical errors.

The development of artificial intelligence for clinical use is advancing rapidly, with researchers creating a variety of tools, from algorithms to help interpret scans to systems that can help with diagnosis. AI is also being developed to help manage hospitals, from bed optimization to supply chain management.

But while experts say the technology could bring enormous benefits to healthcare, they also have concerns, ranging from a lack of testing of the effectiveness of AI tools to questions about who is responsible when something goes wrong for a patient.

Professor Derek Angus of the University of Pittsburgh said: “There will definitely be times when there is a feeling that something has gone wrong and people will look around to blame someone.”

The JAMA AI Summit, organized last year by the Journal of the American Medical Association, brought together a host of experts, including doctors, technology companies, regulators, insurers, ethicists, lawyers and economists.

The final report, of which Angus is the first author, not only examines the nature of artificial intelligence tools and the healthcare fields in which they are used, but also explores the problems they pose, including legal issues.

Professor Glenn Cohen of Harvard Law School, a co-author of the report, said patients may have difficulty demonstrating deficiencies in the use or design of an artificial intelligence product. There may be barriers to obtaining information about its inner workings, and it may be difficult to propose a reasonable alternative product design or prove that a poor outcome was caused by an artificial intelligence system.

He said: “The interactions between the parties can also create problems when bringing a claim – they may point to each other as the guilty party, and they may have an agreement in place to allocate responsibilities under the contract or make claims for damages.”

Professor Michelle Mello of Stanford Law School, another author of the report, said the courts are well equipped to deal with such legal issues. “The problem is that this takes time and will introduce inconsistencies early on, and that uncertainty increases costs for everyone in the innovation and AI ecosystem,” she said.

The report also raises concerns about how artificial intelligence tools are evaluated, noting that many are outside the oversight of regulators such as the US Food and Drug Administration (FDA).

Angus said: “For doctors, effectiveness usually means improved health outcomes, but there is no guarantee that the regulator will require evidence [of that]. Then, once it is available, AI tools can be deployed in a variety of unpredictable ways in different clinical settings, with different types of patients, and with users at different skill levels. There is very little guarantee that what seems like a good idea in a pre-approval package is actually what you will get in practice.”

The report notes that there are currently many barriers to evaluating AI tools, including that they often require clinical use to be fully evaluated and that current evaluation approaches are expensive and cumbersome.

Angus said it was important to secure funding to properly evaluate the effectiveness of artificial intelligence tools in healthcare, with investment in digital infrastructure a key area. “One of the things that was discussed during the summit was [that] the tools that were evaluated the best were the ones implemented the least, while the tools that were most widely used were the least evaluated.”
