The healthcare landscape is rapidly evolving, with technology and innovation driving profound changes in how patient care is delivered and managed. Amid these transformative developments, Artificial Intelligence (AI) holds special status. It stands to revolutionize healthcare, much as it is doing in many other fields. But healthcare, which deals with people’s lives and wellbeing, has unique sensitivities, necessitating a careful and responsible approach.
As AI becomes increasingly integrated into healthcare processes, from clinical decision support to administrative tasks, it brings with it a set of ethical and practical considerations that demand close attention. In a recent online event hosted by Navina and athenahealth, our panel discussed these considerations and explored how healthcare organizations are already using AI to tackle challenges like physician burnout, administrative burdens, and data overload. Here are some highlights from the event.
Improving data accessibility
To understand AI’s contribution to healthcare, it’s important to first understand the state of healthcare without AI. The advent of electronic health records (EHRs) brought about a paradigm shift by digitizing health information, but it also ushered in a new set of challenges, primarily the overwhelming amount of data. As Jacob Reider, former Deputy National Coordinator at the Department of Health and Human Services, put it, "the electronic health record took the problem of huge stacks of papers that needed to be carted into the office. There was no way to review those papers and data was lost. Now they're on the computer. But in many ways, those are still just huge stacks of folders and papers, just now they're electronic."
This transition from paper to electronic records opened up opportunities, but it also underscored the pressing need for innovative solutions: the abundance of data became a double-edged sword. Helping clinicians navigate this sea of data efficiently is where AI comes in.
AI's role in healthcare, as Dr. Yair Lewis, Chief Medical Officer of Navina, explained, is to bring a unique layer of intelligence to the digital data landscape: "AI is adding this layer of intelligence, a layer of sense-making over that proverbial stack of manila envelopes or papers in order to make it easier for the physician to find the information they're looking for." The transformative power of AI in healthcare lies in its potential to harness that data to streamline workflows, improve clinical decision-making, and, most importantly, enhance patient care.
Nurturing responsible AI development
AI is increasingly becoming a strategic tool for addressing a range of healthcare challenges, such as reducing provider burnout and streamlining documentation processes. Dr. Lewis shed light on the nuanced layers of responsibility in the development of AI in healthcare. As he explained, "AI can be graded between one and five levels of autonomy. So even when we're speaking about level two or three autonomy, the system is already recommending treatments." This heightened role of AI in clinical decisions comes with an immense responsibility.
One crucial aspect is ensuring that the algorithms used for clinical decision support are rigorously validated and that they provide reliable recommendations. Dr. Lewis emphasized the importance of training AI models with datasets that are representative of the diverse spectrum of society, and highlighted the need for ethical considerations in AI development. "If we don't do it responsibly, then we will be in a situation where the algorithms could be exacerbating existing biases that we're trying to eradicate."
Navigating the regulatory landscape
As the adoption of AI in healthcare accelerates, there is a pressing need to address the intricate web of regulatory considerations that accompanies this rapid integration and plays an important role in ensuring that AI is indeed responsible. Recent developments, such as the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued by the White House, show that policymakers are taking these issues seriously. Dr. Lewis called for further government involvement in the regulation of AI in healthcare, especially for systems with higher levels of autonomy and greater potential for harm. He illustrated this with a hypothetical scenario of a closed-loop insulin pump, which autonomously measures blood glucose levels and administers insulin. Such a system poses a significant risk to patient safety, making robust regulation imperative.
Navigating the regulatory landscape becomes even more complex when considering machine learning algorithms that require periodic retraining. While traditional medical devices must go through a rigorous recertification process for any modifications, machine learning algorithms continually evolve with data and retraining. "What do you do with a machine learning algorithm that is getting retrained? Now do we recertify it? This is still a work in progress, and we're learning as we go along," Dr. Lewis said.
To strike a balance between innovation and safety, regulatory involvement is crucial. Dr. Lewis clarified that AI systems providing lower levels of clinical support, such as platforms that support—but do not replace—clinical decision-making, require less regulatory attention because they do not make autonomous decisions that could result in direct complications.
Integrating AI into the workflow and reducing burnout
Ensuring that physicians and other staff members use AI solutions is not merely a matter of technological implementation; it requires integrating and harmonizing AI with the intricacies of healthcare workflows and best practices. Dr. Lewis stressed that for AI to be successful, it must reduce the time and mental effort required from healthcare providers. "Anything that is added has to understand and augment the workflow," he explained. The aim is not to replace existing systems but to enhance them, improving efficiency and reducing the burden on healthcare professionals.
New AI tools are being introduced at a time when the demand for quality patient care is at an all-time high, with the healthcare industry facing staffing shortages, provider deficits, and the relentless pressure of value-based programs. Healthcare professionals, particularly physicians, nurses, and physician assistants, find themselves grappling with increasingly overwhelming workloads.
Joshua Frederick, President and CEO of NOMS Healthcare, highlighted how AI can address this critical issue by powering tools that can help bridge those gaps. One of the primary objectives in reducing burnout is to streamline workflows and minimize the time and effort healthcare providers need to spend on administrative and non-clinical tasks. This approach not only optimizes efficiency but also enhances job satisfaction among healthcare professionals, allowing them to focus on what they do best: patient care.
To that end, Dr. Lewis emphasized the importance of seamless integration of AI into clinical workflows. By understanding the intricate processes that healthcare providers follow and designing AI tools that align with and augment these workflows, the burden on healthcare professionals can be lightened. This alignment enables healthcare providers to work more efficiently, allocate more time to patient care, and ultimately reduce the burnout associated with the constant struggle to keep up with increasing administrative tasks.
Responsibly bolstering value-based care
Value-based care (VBC) has brought a sea change in the healthcare landscape, redefining the way care is delivered and reimbursed. While patient well-being is undoubtedly the foremost priority, the financial sustainability of healthcare institutions is closely tied to their performance in value-based programs.
In value-based care, two crucial factors reign supreme: risk adjustment and quality metrics. AI can make an important contribution to both. As Frederick pointed out, "revenues come from how appropriately risk scored your patient is, and how complex your patient is. What truly is applicable to that patient population as far as diagnosis and risk scoring go? You need to prove that you risk score your patients properly, and show how well you prove your quality, how well you ingest all that data and that all those boxes are being checked."
AI empowers healthcare providers to make accurate risk assessments, ensuring precise risk scoring and, ultimately, boosting revenue from VBC programs. By streamlining administrative tasks, increasing efficiency, and assisting providers in tracking and meeting quality metrics, AI not only drives high-quality patient care but also leads to increased savings and value under VBC programs.
To watch an on-demand recording of the webinar, click here.