Accelerating trustworthy AI in radiology: scalable software for clinical users to independently validate commercial products at local sites
103,551
2024-04-01 to 2025-03-31
Collaborative R&D
In this collaborative proposal, the Aival Analysis Lab will be used to evaluate commercial radiology AI products on imaging data representative of a broad Scottish population. We will consider two clinical treatment pathways in line with NHS Scotland service and procurement priorities: stroke triage and urgent suspicion of lung cancer triage, assessing at least three commercial products in each case. Furthermore, we will develop a fully integrated platform capable of monitoring the performance of AI products in deployment at clinical sites.
The use of AI in healthcare can lead to improved outcomes and increased efficiency in healthcare systems. However, AI models are known to produce variable results when applied to different demographics or when changes are made to a given clinical workflow, which can reduce their accuracy and erode operator trust.
Our AI model evaluation and monitoring tool provides a way of assessing whether AI models are operating as designed in new deployments. This accelerates integration at new clinical sites and confirms that the desired performance is both achieved initially and maintained over time. Ensuring that AI models behave as expected reduces overall operational costs for both the vendor and the healthcare user, as any deviations in performance are found and addressed quickly.
Our software provides the ability to assess the performance of different AI models on a given patient population. It can be deployed locally at any clinical site and used to report evidence of performance, fairness and robustness on data from local populations, operators and workflows. We report metrics across subgroups of the population, ensuring that benefits and risks are equally distributed.
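To illustrate the kind of subgroup reporting described above, the sketch below computes sensitivity and specificity per demographic subgroup from case-level results. It is a minimal, hypothetical example; the record fields, subgroup key and metric choices are assumptions, not the product's actual schema.

```python
from collections import defaultdict

def subgroup_metrics(records):
    """Compute sensitivity and specificity per demographic subgroup.

    Each record is a dict with 'subgroup', 'label' (1 = disease present)
    and 'prediction' (1 = AI flagged disease). Field names are illustrative.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for r in records:
        c = counts[r["subgroup"]]
        if r["label"] == 1:
            c["tp" if r["prediction"] == 1 else "fn"] += 1
        else:
            c["fp" if r["prediction"] == 1 else "tn"] += 1
    report = {}
    for group, c in counts.items():
        positives = c["tp"] + c["fn"]
        negatives = c["tn"] + c["fp"]
        report[group] = {
            # None signals "not computable" when a subgroup has no positives/negatives
            "sensitivity": c["tp"] / positives if positives else None,
            "specificity": c["tn"] / negatives if negatives else None,
            "n": positives + negatives,
        }
    return report
```

Reporting these metrics side by side across subgroups is one straightforward way to check that benefits and risks are evenly distributed across a local population.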
Our software can explain the reasoning behind the decisions of black-box AI models, providing reports with visual outputs that make this clear and accessible to users. This explainability enables clinical users to gain trust in the outputs of an AI model.
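One widely used model-agnostic technique for producing such visual explanations is occlusion sensitivity: mask regions of the image and measure how much the model's output score drops. The sketch below is a generic illustration of that technique, not the product's actual method; the `predict` callable, patch size and baseline value are assumptions.

```python
import numpy as np

def occlusion_map(image, predict, patch=8, baseline=0.0):
    """Occlusion sensitivity for a 2D image and a scalar-scoring model.

    Slides a patch over the image, replaces it with a baseline value,
    and records the drop in the model's score. Larger drops indicate
    regions the model relied on more heavily.
    """
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - predict(occluded)
    return heat
```

Overlaying the resulting heat map on the original scan gives clinical users a visual, accessible account of which regions drove the model's decision.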
Clinical sites also often wish to benchmark different solutions for a given disease indication against one another, to determine which is most suitable for their patient population and use case. Our tool generates reports that directly enable this comparison.
The software is light, scalable and easy to integrate. In this project we aim to prove its efficacy in assessing and monitoring AI model performance and its ability to accelerate AI model deployment and reduce costs.
Federated AI Monitoring Service (FAMOS)
30,113
2024-02-01 to 2025-01-31
Collaborative R&D
This project aims to accelerate the integration of AI in healthcare by developing a privacy-preserving Federated AI Monitoring Service (FAMOS) on top of an existing medical AI platform. Current monitoring and evaluation costs hinder small to medium-sized enterprises (SMEs) from proving the efficacy and safety of their healthcare AI products, impeding adoption. This lack of evidence hampers widespread AI integration into healthcare systems.
The project addresses three key research questions: 1) how to measure AI app performance locally and in a privacy-preserving way, enabling proactive monitoring within hospitals and centralised visibility for stakeholders; 2) how to independently validate the chosen metrics with healthcare providers and AI suppliers; and 3) how to comprehensively test the monitoring platform end to end, using three diverse applications across varied populations.
By implementing FAMOS on the medical AI platform, the project seeks to facilitate AI application deployment and evaluation within hospitals. It will achieve this through three main objectives: 1) Developing a monitoring service for privacy-preserving data transmission to a central dashboard, 2) Validating metrics' relevance to AI safety and trustworthiness, and 3) Enabling internal and external evaluation by hospitals and suppliers through the central dashboard.
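Privacy-preserving transmission of the kind named in objective 1 typically means sending only aggregate counts, never patient-level data, to the central dashboard. The sketch below illustrates one such approach, with small-cell suppression so that rare combinations cannot identify individuals; the field names, payload shape and threshold are hypothetical, not FAMOS's actual design.

```python
import json
from datetime import date

def build_monitoring_payload(site_id, records, min_count=5):
    """Aggregate local case results into counts before transmission.

    Only (finding, agreement) counts leave the hospital; cells smaller
    than min_count are suppressed (a k-anonymity-style safeguard).
    Record fields are illustrative assumptions.
    """
    agg = {}
    for r in records:
        key = (r["finding"], r["agreed"])
        agg[key] = agg.get(key, 0) + 1
    cells = [
        {"finding": f, "agreed": a, "count": n}
        for (f, a), n in agg.items()
        if n >= min_count  # drop small cells that could re-identify patients
    ]
    return json.dumps({
        "site": site_id,
        "date": date.today().isoformat(),
        "cells": cells,
    })
```

A central dashboard receiving such payloads can track performance across sites without any patient-identifiable information ever leaving the hospital network.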
Upon successful implementation, the project plans to scale AI evaluations using the innovative software, collaborating with an expanding network of SMEs, healthcare providers, and commissioners. Ultimately, this initiative endeavours to remove barriers, ensuring the safe and effective integration of AI in dynamic healthcare environments.
Developing tools for pre-market evaluation and post-market surveillance of Medical Imaging AI
103,981
2024-01-01 to 2024-12-31
Collaborative R&D
AI tools have shown promise in analysing medical imaging and improving the speed and accuracy with which healthcare professionals interpret images. However, uptake of these algorithms by healthcare organisations has been slow for several reasons, including:
● Lack of independent clinical testing of the algorithms' accuracy.
● Lack of training for medical staff in using the algorithms, which undermines confidence in them.
● Difficulties integrating the algorithms with existing hospital IT systems so that the tools can be trialled before purchase.
● Lack of trust in the real-world performance of the tools.
RAIQC Ltd has developed a web-based platform for training of medical staff in image interpretation as well as validation of imaging AI algorithms. Through the project, the company aims to further develop their platform into an end-to-end solution for training, testing and deploying AI algorithms that will:
● Allow AI developers to perform in silico clinical trials to generate evidence of the efficacy, usability and health economic value of their AI tools.
● Train medical staff in the safe and appropriate use of AI in medical image interpretation.
● Make it easier for hospitals to trial algorithms without the requirement of full integration with the existing hospital systems.
● Monitor the real-world performance of AI tools after deployment in clinical environments, including their accuracy across different diseases, patient demographics and scanner types.
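Post-market surveillance of the kind listed above can be as simple as a rolling-window check of AI outputs against reference reads, raising an alert when agreement drops below a threshold. The class below is a generic sketch under those assumptions, not RAIQC's implementation; the window size and threshold are illustrative.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window agreement monitor for a deployed AI tool.

    Records whether each AI output matches the reference read and
    raises an alert when agreement over a full window falls below
    a configurable threshold (values here are illustrative).
    """
    def __init__(self, window=100, threshold=0.85):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, ai_output, reference):
        self.results.append(ai_output == reference)

    def accuracy(self):
        # None until at least one case has been recorded
        return sum(self.results) / len(self.results) if self.results else None

    def alert(self):
        acc = self.accuracy()
        # Only alert once the window is full, to avoid noisy early warnings
        return (acc is not None
                and len(self.results) == self.results.maxlen
                and acc < self.threshold)
```

Running one monitor per disease, demographic subgroup or scanner type would surface exactly the kind of stratified real-world drift the bullet above describes.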
During Phase I of the project, a consortium of AI developers, NHS Trusts, clinicians and academics was assembled to define the technical and clinical requirements of the platform. RAIQC Ltd will then lead the development of the pre-market evaluation and post-market surveillance tools and connect them with hospital IT systems. Once the tools have been developed, they will be used to test algorithms from AI vendors that are part of the consortium.
AI has the potential to revolutionise healthcare delivery in the NHS and worldwide by improving diagnostic accuracy and increasing productivity. The outputs of the project will help develop trust in AI tools and in turn accelerate their adoption into clinical practice.