AI in healthcare: four focus areas to promote wider clinical adoption

Artificial intelligence (AI) has burst onto the scene in healthcare in recent years, propelling new innovations that promise to improve patient care and outcomes while reducing costs. But as anyone working in the field knows, the road from AI research to clinical practice can be a rocky one – for reasons that extend well beyond the technology itself. How can we drive adoption of AI in healthcare at scale to fully deliver on its promise for patients and healthcare professionals?

From precision diagnosis to acute patient monitoring and self-management of chronic disease, AI in healthcare has shown potential to support providers and patients at every stage of the care continuum. Applications range from algorithms that augment the expertise of healthcare providers and support patient-centered decision-making, to workflow automation tools that can improve operational efficiency and free up focus for patient care. AI is also helping hospitals forecast and manage patient flow, all the way from hospital admission to discharge, enabling them to adapt to rapidly changing circumstances. And with healthcare increasingly moving into the home, AI-based insights can empower people to take care of their own health and well-being to keep them out of the hospital – while staying closely connected to healthcare professionals through remote patient monitoring.

The need for such technologies will only be felt more strongly in the years to come. Healthcare systems were already struggling to meet rising patient demand before COVID-19 hit. With global staff shortages projected to increase to 18 million by 2030 [1], healthcare is on an unsustainable trajectory if we don’t urgently rethink how and where it’s delivered. A 2021 Medscape survey revealed that 42% of healthcare professionals felt burned out, with the ongoing reverberations of the pandemic adding to the strain for many [2]. The growing administrative burden in healthcare also weighs heavily on them, pulling their attention away from what drew them to medicine in the first place: caring for patients [3]. It compels us to ask how technologies like AI can alleviate the burden for healthcare professionals and make their work more rewarding, allowing them to spend their time where it adds most value.

Yet despite encouraging first applications and a wealth of research, several challenges stand in the way of wider adoption of AI in clinical practice – ranging from (lack of) workflow integration and trust to struggles with data access and concerns about data privacy [4,5].

In an attempt to address such challenges, we have identified four enabling areas for realizing the full potential of AI in healthcare: people and experiences; data and technology; governance and trust; and partnerships and new business models. Let’s explore each of these four enabling areas in more detail.

1. People and experiences

The value of AI in healthcare is only as strong as the human experience it supports. AI innovation should therefore focus on the unmet needs of providers and patients first and foremost. The most beneficial AI innovations in healthcare, like any other innovation, are need-driven rather than technology-driven. They enhance the human care experience without getting in the way of it. Or as one hospital CIO put it: “Digital medicine is just medicine in the same way that really good technology is not about technology. It blends into the fabric of our everyday lives.” [6]

Human-centered design for seamless workflow integration

To achieve this kind of seamless workflow integration, human-centered design is essential. This requires involving all relevant stakeholders – including end users – from the very beginning of the development process. At Philips, we use tools such as co-creation sessions, experience flows, and on-site 360-degree workflow analyses to understand the context in which AI-enabled technology is used. One consistent learning from such collaborations is that AI in healthcare should reduce information overload rather than add to it. What may seem like a useful algorithm in a research setting can actually be a burden to healthcare professionals if it means adding yet another thing to their workflow. Take radiology, for example. Radiologists work in a complex and time-pressured environment, running different software applications in parallel on multiple screens. If AI algorithms require them to manage additional applications, the net effect may be that radiologists actually spend more – not less – time processing medical images [7]. Instead, algorithms must integrate seamlessly into their workflows, offering a unified experience without requiring additional task-switching.

Developing an AI-ready workforce

To prepare today’s and tomorrow’s medical workforce for a future enabled by AI and other digital technology, education is also essential. Increasingly, physicians will need to be well-versed in both biomedical and data science, with an appropriate understanding of AI’s strengths and limitations. Nurses at the bedside should also feel comfortable using AI-enabled clinical decision support systems, knowing how to get the most out of data-based insights in conjunction with their own professional experience. And they must be able to explain to patients how AI helps to inform medical decisions. National health systems should therefore prioritize AI and data science in their education curricula. Institutions such as the European Society of Radiology have rightly called for AI to be included in the curricula for future radiology residents [8]. Other specialties where AI is most likely to spawn new applications first, such as pathology and oncology, would also benefit from new education programs that incorporate the latest knowledge from healthcare practitioners, academia, and industry players. In addition, virtual collaboration can support peer-to-peer learning. At Philips, we are also promoting broader public awareness of the emerging role of AI in healthcare. For example, through the Dutch Kickstart AI consortium, we have contributed to a national course on AI aimed at the general public. Such initiatives can help to make people more familiar with and receptive to the use of AI in healthcare and personal health devices.

2. Data and technology

Developing AI-enabled solutions relies on access to high-quality data. The reality, however, is that today’s healthcare data is often locked away in disparate and disconnected systems, posing a barrier that needs to be addressed for AI in healthcare to scale. In our Future Health Index 2021 report, healthcare leaders cited difficulties with data management (44%) and lack of interoperability and data standards (37%) as the biggest obstacles to adoption of digital health technology in their hospital or healthcare facility. These challenges can also make it difficult to compile the necessary high-quality data for training AI models, particularly if those models rely on multimodal and longitudinal data from different sources.

Promoting data sharing and interoperability

To overcome these challenges, three things are needed. First, robust and interconnected platform infrastructures are needed for collecting, combining, and analyzing data at scale. As healthcare becomes increasingly distributed, extending from the hospital to the home, such infrastructures need to cover the entire continuum of care to connect patient data across settings. Through our cloud-based Philips HealthSuite platform, we are helping healthcare providers collect, compile, and analyze data from multiple sources, including medical records, imaging and monitoring data, as well as personal health devices. Second, interoperability and standardized data sharing between different hospitals and health systems are key to exploiting the full potential of data and AI in healthcare. Data should be available in formats that can be shared effortlessly, transparently, and securely, in a way that is compliant with relevant privacy regulations. At Philips, we are promoting the use of open data standards and semantic interoperability, through methods such as a unified Information Language System, to allow healthcare providers to connect and integrate data in a meaningful way. Third, regional legislation and collaboration should enable secure exchange of and access to properly annotated data for AI research and clinical practice, while safeguarding patient privacy. We therefore support initiatives such as the creation of a common European Health Data Space, which is set to promote better exchange of and access to different types of health data, ranging from Electronic Health Records to genomics data, across EU member states.

Pioneering next-generation AI

Next-generation approaches to AI development can also help address some of the challenges around data access. For example, federated learning allows multiple healthcare institutions to gain insights through a shared AI model, without having to move patient data beyond the institutions where it resides. The machine learning process occurs locally at each participating institution; only the characteristics of the AI model are transferred to a central cloud server. The data stays where it is. Recent research has shown that models trained by federated learning can achieve performance levels comparable to models trained on centrally hosted data sets, and superior to models based on single-institution data [9].
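To make the idea concrete, here is a minimal sketch of the federated averaging pattern described above, using a toy linear model and NumPy. The three "hospitals", their data sizes, and the learning-rate settings are all hypothetical illustrations, not Philips implementations: each site runs gradient descent locally on its own data, and only the resulting model weights are sent to the aggregator.

```python
# Minimal federated averaging (FedAvg) sketch with NumPy.
# Hypothetical setup: three "hospitals" each fit a linear model locally;
# only model weights leave each site, never the raw patient data.
import numpy as np

def local_train(X, y, weights, lr=0.1, epochs=50):
    """One site's local gradient-descent update; its data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Central server aggregates weights, weighted by local dataset size."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three sites with differently sized local datasets drawn from the same process
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds
    updates = [local_train(X, y, global_w) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print(global_w)  # converges toward [2.0, -1.0] without pooling any raw data
```

Production systems add secure aggregation, differential privacy, and handling of non-identically distributed data across sites, but the core loop – local training, central weight averaging – is the same.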

Where data to train AI models is scarce, we can also tap into existing science-based knowledge to help fill the gaps. For example, anatomical and physiological knowledge of the lungs or the heart can be used to create synthetic images that complement existing annotated data. Medical image segmentation models trained on such synthetically augmented data sets have shown better accuracy than models trained only on a small set of real-world data – showing the promise of a hybrid modelling approach that combines the power of data with science-based knowledge [10]. By exploring the possibilities of these and other next-generation AI methods, we will be able to take on data-related challenges in AI development with a more versatile and effective toolkit.
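As a simplified analogue of this hybrid approach, the sketch below augments a scarce set of "annotated" samples with synthetic samples drawn from a parametric model that encodes prior knowledge of each class. The two-class setup, the class means, and the nearest-centroid classifier are illustrative assumptions standing in for the anatomical models and segmentation networks described above.

```python
# Simplified analogue of knowledge-based synthetic data augmentation.
# Hypothetical example: a parametric "knowledge" model of each class is used
# to generate synthetic samples that supplement a tiny annotated dataset.
import numpy as np

rng = np.random.default_rng(42)
MEANS = {0: np.array([1.5, 1.5]), 1: np.array([-1.5, -1.5])}

def sample(label, n, scale=1.0):
    """Draw samples from the parametric model of a class."""
    return rng.normal(loc=MEANS[label], scale=scale, size=(n, 2))

def nearest_centroid(train_X, train_y, X):
    """Classify each point by its closest class centroid."""
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in (0, 1)}
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    return (d1 < d0).astype(int)

# Scarce real annotations: only 5 samples per class
real_X = np.vstack([sample(0, 5), sample(1, 5)])
real_y = np.array([0] * 5 + [1] * 5)

# Synthetic samples generated from science-based prior knowledge
syn_X = np.vstack([sample(0, 200), sample(1, 200)])
syn_y = np.array([0] * 200 + [1] * 200)

# Hybrid training set: real annotations plus synthetic augmentation
aug_X = np.vstack([real_X, syn_X])
aug_y = np.concatenate([real_y, syn_y])

# Held-out evaluation set
test_X = np.vstack([sample(0, 500), sample(1, 500)])
test_y = np.array([0] * 500 + [1] * 500)

acc_aug = (nearest_centroid(aug_X, aug_y, test_X) == test_y).mean()
print(f"accuracy with synthetic augmentation: {acc_aug:.2f}")
```

The design point is that the synthetic generator contributes information the ten real samples cannot: because it encodes the known structure of each class, the augmented centroids sit close to their true positions, which is the same role anatomical knowledge plays when generating synthetic medical images.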

3. Governance and trust

To strengthen public and professional trust in AI in healthcare, technological advances need to go hand in hand with appropriate governance around data privacy, security, and the ethics of AI. When asked what would keep consumers from using digital health technology, 41% ranked “concerns about my privacy or data security” as the number one barrier [11]. Likewise, for healthcare CIOs tasked with keeping patient data safe across a growing plethora of channels and devices, data security is as big a concern as ever [12]. At Philips, we are committed to proactively addressing security and privacy concerns, as captured in our Data Principles. But AI in healthcare brings with it other risks as well, which call for additional standards and safeguards. For example, we should encourage appropriate trust in AI while preventing physicians from relying on it blindly, because no algorithm will ever be perfect. We must also be mindful that AI can exacerbate existing health inequalities through biased data sets that do not accurately represent the target population. Such considerations led us to develop and implement a set of guiding principles for the responsible use of AI – all based on the notion that AI should benefit healthcare providers, patients, and society as a whole, while avoiding unintended consequences such as bias. We have written about these AI Principles and the importance of fair and bias-free AI before, and refer to these articles for more detail on this vital topic.

What’s worth highlighting here is that AI is also increasingly being recognized as a force for good that can promote more fair and equitable healthcare. For example, Philips recently received a grant from the Bill & Melinda Gates Foundation to develop an AI-based application to improve the quality and accessibility of obstetric care in low- and middle-income countries. The application will be designed to help nurses identify potential problems in pregnancy at an early stage, thereby giving expecting moms a better chance of bringing a healthy child into the world. This is just one of many opportunities for AI in healthcare to make a difference where it’s needed most.

4. Partnerships and new business models

In a sector as complex as healthcare, no individual player has all the solutions. Partnerships, ecosystem integration, and new business models such as SaaS-based software marketplaces are therefore becoming increasingly important in bringing AI into clinical practice. Rolling out successful AI projects at scale requires intense collaboration between people with highly diverse backgrounds – from clinicians to data scientists, patients, hospital decision makers and IT professionals. Partnerships are the key to bringing these disciplines together. For example, as part of BigMedilytics – an EU-supported big data consortium led by Philips Research – we have been working closely with clinical partners on developing predictive modelling for prostate cancer surgery outcomes, which can support physicians and patients in their treatment decisions, for better outcomes and quality of life. Through partnerships and ecosystem integration, large healthcare solution providers like Philips can also make it easier for hospitals to embed AI applications from startups into their workflows. For example, in radiology, this can take the form of a curated software marketplace that allows radiologists to download validated apps from a large number of third-party developers via one common platform – without having to worry about point-to-point integrations. By providing such services through the cloud on a Software as a Service (SaaS) basis, AI applications can be deployed more easily and updated over time, for continuous innovation.

Finally, clear criteria for the reimbursement of AI in healthcare will also be crucial for wider adoption. Today, the financing of AI is still uncertain in many cases. Reimbursement schemes were not designed with AI in mind. A transition from fee-for-service to value-based payment models would go a long way towards creating the appropriate incentive framework for the sustainable adoption of AI in healthcare. This needs to go hand in hand with more prospective clinical studies establishing improved outcomes through the use of AI, demonstrating its value to providers, payers, and patients [13].

Clearly, a wide range of challenges remain – many of which are not about technology per se. To drive adoption of AI in healthcare at scale, we must take a much broader view, and ask ourselves how AI can best be integrated into the workflows, policies, and ecosystems that make real transformation possible. Only by addressing these enablers in a concerted way will we be able to deliver on the full promise of AI. We owe it to all the healthcare professionals and patients whose lives could be improved by it.

Read more about AI in healthcare

For a further exploration of how AI can add value to patients and healthcare professionals across the continuum of care, download our position paper “How AI can enhance the human experience in healthcare”.

References

[1] World Health Organization.
[2] Medscape National Physician Burnout & Suicide Report 2021.
[3] Scientific American.
[4] Kelly, CJ, Karthikesalingam, A, Suleyman, M, Corrado, G, King, D. 2019. Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine 17, 1:195.
[5] Strohm, L, Hehakaya, C, Ranschaert, ER, Boon, WPC, Moors, EHM. 2020. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. European Radiology 30, 5525–5532.
[6] Marx, E & Padmanabhan, P. 2020. Healthcare Digital Transformation.
[7] Kwee, TC, Kwee, RM. 2021. Workload of diagnostic radiologists in the foreseeable future based on recent scientific advances: growth expectations and role of artificial intelligence. Insights Imaging 12, 88.
[8] Richardson, ML, Garwood, ER, Lee, Y, et al. 2021. Noninterpretive Uses of Artificial Intelligence in Radiology. Academic Radiology 28(9).
[9] Rieke, N, Hancox, J, Li, W, et al. 2020. The future of digital health with federated learning. NPJ Digital Medicine 3, 119.
[10] Frid-Adar, M, Diamant, I, Klang, E, et al. 2018. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 321:321-331.
[11]
[12]
[13] McKinsey and EIT Health. Transforming healthcare with AI: the impact on the workforce and organisations.


Copyright: Philips