
Jesse Ehrenfeld, MD, MPH, co-chair of AAMI’s Artificial Intelligence (AI) Standards Committee, assumed the presidency of the American Medical Association (AMA) in June. In this Q&A, Ehrenfeld explains recent AMA advocacy on issues related to augmented intelligence, the association’s preferred term.
Q: The AMA House of Delegates adopted a policy in June that calls for more regulatory oversight of insurers’ use of AI in reviewing patient claims and in prior authorization requests for health services. What challenges is this new policy trying to address?
Ehrenfeld: Let me start by saying that this year, leading the Recovery Plan for America’s Physicians is among my top priorities. Our work around improving telehealth and digital health, and this new policy, came from growing concerns that health insurers are increasingly relying on AI in their prior authorization and claims adjudication processes. The challenge is that when used inappropriately, AI can lead to improper denials and ultimately prevent patient access to medically necessary care. Our new policy is straightforward. It calls for health plans using AI technology to use a thorough and fair process that’s based on clinical criteria and keeps humans in the loop, including reviews by doctors and other health care professionals who have the right expertise for the service under review.
Q: The AMA also has communicated with the Office of the National Coordinator (ONC) for Health Information Technology about its proposed rule, Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1). What are the key takeaways from this?
Ehrenfeld: Since 2019, we have urged ONC to make sure that certified EHRs [electronic health records] can protect sensitive health information. We support efforts to promote the sharing of health information, but the current regulatory policies and EHR technology stack are insufficient to ensure that patients, their caregivers, or physicians can direct how sensitive health information is shared. So, we’re asking ONC to correct this failure of EHR technology and policy by updating the ONC certification requirements and information blocking regulations as they apply to AI connected to an EHR. Physicians currently are provided very little information about how the AI uses patient medical information or how the AI was trained or developed. Many physicians are encountering, or will soon encounter, AI embedded within their EHR workflows. When asked, physicians and our members tell us that they need clear and meaningful information on things like:
- What is the health care AI’s clinical effectiveness?
- What is its validity?
- How do we know that it’s safe to use?
- What are the limitations?
- What are the data privacy protections embedded in the technology?
- How have considerations around bias been handled?
Very little AI transparency is required for tools and technologies that are not already regulated by the FDA, and unregulated AI products are becoming more and more commonplace in health care settings. So, we support ONC’s AI transparency policies, which would require EHR vendors to provide physicians access to information about the AI that’s being used in conjunction with patient medical records and the EHR.
Q: The AMA’s communication to the ONC states that these issues put physicians, rather than AI vendors, at risk, correct?
Ehrenfeld: Correct. We have seen what we think are inappropriate proposals through rulemaking that would place the liability for the output of an algorithm solely with the end user, i.e., the physician. We think that liability ought to be most appropriately placed with the individuals who are best suited to mitigate the risk. And in many cases, that’s not going to be the end user. It’s the developer, the software vendor, the implementer, or the entity that bought or licensed the technology. And again, in our health care system today, that often will not be the physician. So, to solely place all the liability when there is a problem with the output of an algorithm on the end user seems misguided.
Q: Do you think that the unsettled policy environment is holding physicians back from adopting AI technologies?
Ehrenfeld: I think it is too early to tell what exactly is going to happen. I’m an optimist. I have a background in clinical informatics. I’m board-certified in informatics. So, I recognize the power of these tools and I just want to make sure that we’re doing everything we can at the AMA to reduce the barriers to adoption and ensure that they’re working for patients and physicians.
Q: You co-authored an editorial for the Journal of Medical Systems on AI in medicine and ChatGPT. What inspired that?
Ehrenfeld: Well, there’s a lot of misunderstanding about ChatGPT and about what large language models are and are not. And I think there is this sentiment in certain parts of society that AI tools will replace physicians. I think that’s the wrong approach. The AMA really tries to push the concept of augmented intelligence, not artificial intelligence. Where I see the really exciting opportunities is in using these tools to detether physicians from their computers and bring more time and attention back to our patients, which drives patient satisfaction and restores the joy in the practice of medicine for physicians. There are so many administrative tasks that I know these tools will be helpful for, but focusing on how we replace clinical decision making doesn’t seem to be the right place to start. Rather, simplification of processes and added efficiencies through the use of emerging tools is, I think, where the money’s going to be.
Q: Is there anything about AI you want to communicate to the AAMI community of health technology manufacturers and healthcare technology managers?
Ehrenfeld: Just getting back to the liability issue, you might imagine why a manufacturer or software developer would be enthusiastic about having liability for a device, an algorithm, or software placed solely with the end user. But I’ll tell you, a lot of companies and trade associations that we interact with share the perspective that if we do erroneously place liability solely with the end user, that will kill the market and there will be a reluctance to use and purchase these products. So, there’s a need to make sure that we get that liability question addressed and answered in a rational way, and I think we’re seeing more alignment between our perspective and others in the health care ecosystem than we were initially.
For more information, visit aami.org.
