Artificial intelligence (AI) has immense potential to revolutionise healthcare. This is particularly true in radiology, where advanced AI algorithms can accurately detect anomalies, prioritise urgent cases and reduce the time radiologists spend on routine tasks. In fact, more than half of all CE-marked AI devices developed between 2015 and 2020 were intended for use in diagnostic imaging.1 However, despite some limited increase in the uptake of AI for specific clinical applications, there has not been a significant rise in its routine use – either in radiology or across the broader healthcare sector – over the past four years. Several challenges contribute to this slow acceptance, including problems integrating AI into clinical workflows, a lack of trust stemming from insufficient high-quality efficacy data, the absence of universal ethical and regulatory standards in the industry, and high costs. Amied Shadmaan, Director of AI and Clinical Collaborations at GE HealthCare, joined a panel of experts in a workshop session at HIMSS24 Europe to explore the barriers to AI adoption in medical settings.
Meet the panel
Professor Evis Sala* is the Director for Diagnostic Imaging and Radiology at Gemelli University Hospital (Fondazione Policlinico Universitario Agostino Gemelli) in Rome, Italy, as well as a Professor of Radiology at the Università Cattolica del Sacro Cuore. She co-founded the AI startup Lucida Medical, which has developed an MRI-based AI tool to detect prostate cancer.
Dr Sarim Ather* is a Consultant Radiologist and the Digital and AI Lead at Oxford University Hospital, and he advises the Royal College of Radiologists on AI technology. He co-founded RAIQC, a web-based imaging platform that simulates day-to-day practice to support radiology training and education.
Mr Mattia Fantinati is President and Founder of the Internet Governance Forum of Italy, a UN-affiliated body. He is also a former Undersecretary of State for Public Administration and Member of Parliament in Italy.
Early-stage technology concerns
We are still in the early stages of understanding AI's potential in healthcare, and this raises concerns about its performance and capabilities. In the past, medical devices typically took many years to develop and adopt, but advanced software has sped up the creation of many AI tools, sometimes without enough time for thorough validation and integration. This highlights the need for robust and comprehensive evidence on how AI devices affect patients, as well as clinical and health economic outcomes.
There are several key points to consider when assessing the value and impact of AI technologies.
- End-to-end measurements: AI technologies cannot be evaluated based solely on their diagnostic accuracy. Their influence across the entire care pathway must be considered to gauge the value of AI for both clinicians and patients.
- Evidence generation: many AI tools lack sufficient evidence to show that they are safe and effective, hindering acceptance by doctors and patients.2 This data shortage sparks debate over whether rigorous clinical trials or real-world data are more valuable for AI validation. Generating this evidence – regardless of the method – will require substantial time and resources.
- Patient inclusion: involving patients in the development and adoption of AI helps to ensure that these technologies deliver tangible health economic benefits.
Implementation challenges
Integrating AI into daily radiology practice also demands significant adjustments to current ways of working, making successful adoption dependent on ongoing training and the willingness of stakeholders to incorporate these innovations into established workflows. In addition, medical establishments need to have the right digital foundations in place. Investing in IT resources is critical, but this requires a reallocation of budgets and a shift in priorities within the hospital setting. The COVID-19 pandemic underscored the need for improved institutional IT capabilities and cloud access to enable data sharing, laying some of the groundwork for AI uptake.

However, interoperability issues – a consequence of the fragmented nature of healthcare systems – remain a significant hurdle. In many countries, regions, hospitals and even departments operate autonomously, leading to data silos that complicate data sharing. Consequently, data collected in one setting is often not readily available or usable in another, making the delivery of collaborative care challenging.3 Improving interconnectivity depends on establishing effective interoperability standards and data sharing protocols to allow the seamless and secure exchange of medical information.
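To make the idea of interoperability standards concrete, the minimal sketch below (not part of the workshop discussion) shows how a system might retrieve a patient record over HL7 FHIR, one of the most widely adopted healthcare data-exchange standards. The server URL is a public FHIR test endpoint and the patient ID is a placeholder used purely for illustration; a real deployment would point at the hospital's own authenticated FHIR server.

```python
import requests

# Illustrative only: public HAPI FHIR test server, not a production endpoint.
FHIR_BASE = "https://hapi.fhir.org/baseR4"

def fetch_patient(patient_id: str) -> dict:
    """Fetch a FHIR Patient resource and return it as parsed JSON."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors (e.g. 404, 500)
    return response.json()

if __name__ == "__main__":
    patient = fetch_patient("example")  # placeholder patient ID
    print(patient.get("resourceType"), patient.get("id"))
```

Because FHIR defines both the resource structure (the Patient JSON) and the REST interface for retrieving it, two systems that each implement the standard can exchange records without bespoke integration work – which is precisely the silo-breaking benefit described above.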
Patient and user satisfaction
The usability of AI tools is also critical for their adoption in healthcare; if new technologies integrate smoothly into existing workflows and are user-friendly, they are more likely to be accepted across the board. To achieve this, startups should involve medical professionals in the development process to ensure that new technologies meet practical needs. This would also strengthen clinicians' trust in and understanding of AI, which is essential for its successful implementation. Engaging junior staff, who are typically more open to new technologies, can also help to promote AI acceptance.
Ensuring that AI systems enhance efficiency in clinical practice – without increasing complexity – is essential for the successful adoption of these technologies. Startups should treat minimal disruption to current practices as a key requirement, ensuring that a new method doesn't add extra steps to existing workflows. AI tools should also be designed with the constraints of existing legacy IT systems in mind, as these limitations can affect how well new technologies function. Ultimately, a collaborative approach between innovators and practitioners will drive adoption and improve patient and user satisfaction.
Ethical considerations
Ethical and liability issues are additional concerns when introducing AI into a healthcare setting. Although AI is viewed as a tool to augment – rather than replace – human expertise, its impact on clinical practice and patient safety needs to be carefully considered. Clinicians commonly worry about the accuracy and reliability of AI systems, the legal and professional consequences if AI makes a mistake, and how AI handles sensitive patient information. Transparency about how AI works, the data it uses and any potential biases is crucial for gaining their confidence.
The future of AI in healthcare
AI holds significant promise for enhancing healthcare, especially in a radiology setting. However, the path to fully integrating AI into clinical practice is complex and will take time. Concerns about AI capabilities, ease of integration, ethical considerations and the need for robust digital infrastructure must all be addressed. Achieving successful adoption of these new technologies will require ongoing collaboration, transparency and investment in both technology and training.
The GE HealthCare workshop session at HIMSS Europe 2024 is available to view here.
References
- Muehlematter, UJ, Daniore, P and Vokinger, KN. 2021. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015-20): a comparative analysis. The Lancet Digital Health, 3(3):e195-e203. doi:10.1016/S2589-7500(20)30292-2.
- Silcox, C et al. 2024. The potential for artificial intelligence to transform healthcare: perspectives from international health leaders. npj Digital Medicine, 7(88). doi:10.1038/s41746-024-01097-6.
- National Audit Office. 2020. Digital transformation in the NHS. Accessed 14 June 2024. https://www.nao.org.uk/reports/the-use-of-digital-technology-in-the-nhs/.
*Professor Evis Sala, Dr Sarim Ather, and GEHC do not have a contractual relationship beyond the fact of being a GEHC product end user. The statements by GEHC customers are based on their own opinions and on results that were achieved in the customer's unique setting.