Will patients accept AI brain scans?

Source: University of Auckland (UoA)

AI-powered MRIs could cut the use of potentially toxic contrast dyes in traditional scans by up to 80 percent, but will patients embrace AI-driven brain scans?

Getting a brain scan is not only a nerve-racking, claustrophobic experience; it also involves the use of chemical agents such as gadolinium-based contrast dyes.

While these dyes enhance the clarity of MRI images, they can cause toxicity in the body.

Recent advances in artificial intelligence (AI) are poised to reduce the reliance on gadolinium-based agents, and AI-driven MRI scans might soon offer a safer alternative for patients.

University of Auckland Business School academic Dr Farkhondeh Hassandoust says as AI begins to match and, in some instances, surpass human capabilities in tasks such as image analysis, the subtleties of AI implementation need to be explored.

“I became interested in patients’ perceptions of AI in radiology after learning about a Sydney-based startup, DeepMeds, which uses AI to generate MRI images with significantly less contrast dye.

“Learning about what they were doing inspired our study,” she says.

The researcher says the trajectory of this kind of AI-driven technology in healthcare is determined not only by its technical efficacy, but also by people’s perceptions and attitudes.

“We wanted to know if patients would accept and trust AI imaging tools over more tried and tested methods. We were keen to find out how their understanding of the technology, including how it works, as well as any risks, benefits and other features, might influence their openness to AI-driven MRIs.”

Hassandoust, together with Saeed Akhlaghpour and Javad Pool (University of Queensland) and Roxana Ologeanu-Taddei (TBS Education), surveyed 619 participants to uncover the factors influencing people’s acceptance of AI in MRI scans.

Their findings highlight the importance of transparency and communication – specifically, the concept of AI explainability.

‘Explainable AI’ is artificial intelligence that’s programmed to describe its purpose, rationale and decision-making process in a way that the average person can understand. In the context of MRI scans, this might mean showing how an AI system analyses an image and arrives at a particular diagnosis.
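
To make the idea concrete, the sketch below illustrates one simple explainability technique, occlusion sensitivity, on a synthetic image: the system hides parts of the image and reports where hiding pixels changes its output most. It is a minimal illustration only; the model, data and every name in it are hypothetical, and the study does not specify which explainability method any particular MRI system uses.

```python
# Minimal sketch of occlusion-based explainability (hypothetical example).
import numpy as np

def toy_lesion_score(image: np.ndarray) -> float:
    """Stand-in for an AI model's 'probability of abnormality'.
    It simply responds to bright pixels in a fixed region,
    mimicking a model that has learned to focus on a lesion."""
    return float(image[8:16, 8:16].mean())

def occlusion_saliency(image: np.ndarray, score_fn, patch: int = 4) -> np.ndarray:
    """Slide a neutral patch over the image; where hiding pixels
    drops the score the most, the model was 'looking' there."""
    baseline = score_fn(image)
    h, w = image.shape
    saliency = np.zeros_like(image)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = image.mean()  # hide this patch
            saliency[y:y+patch, x:x+patch] = baseline - score_fn(occluded)
    return saliency

# Synthetic 24x24 "scan" with a bright square standing in for a lesion.
rng = np.random.default_rng(0)
scan = rng.normal(0.2, 0.05, (24, 24))
scan[8:16, 8:16] += 0.6

heatmap = occlusion_saliency(scan, toy_lesion_score)
print("Most influential region (row, col):",
      np.unravel_index(heatmap.argmax(), heatmap.shape))
```

The resulting heatmap peaks over the synthetic ‘lesion’, which is the kind of visual evidence an explainable system could show a patient or clinician alongside its recommendation.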

The study shows that explainability plays a pivotal role in building trust, regardless of a patient’s health condition. Whether facing a cancer diagnosis or seeking answers for a minor issue like sinus congestion, participants preferred AI systems that could demystify their recommendations.

One respondent said: “I happen to know, already, that readings of various scans (MRI, CT, x-ray) are already rather unreliable. Show the same film to 100 radiologists, get at least 10 different answers… What’s just a shadow to one is definitely something to worry about to another…”.

In contrast to the perceived variability of human readings, explainable AI was seen as safer, more consistent and more accurate.

Another participant said: “I would choose it over traditional MRI because of the dye they put in your body. I’ve had it injected into my body and it’s not a good experience. It makes me get very flushed and hot, then I just feel bad afterwards. Then they tell you to drink a lot of liquids to get the stuff out of your body.”

Participants also highlighted the potential benefits of AI-driven MRIs and barriers such as insurance coverage. One of the participants said: “I feel with the new AI technology I would be getting the best treatment that was fully detailed and thorough throughout my MRI.”

Another said: “I think it could save me money if problems are detected earlier. I’m always concerned about a health problem getting too expensive. It would also potentially spare me from some side effects of traditional MRI.”

The use of AI in healthcare, particularly radiology, extends beyond MRIs. In 2023, nearly 80 percent of AI-enabled medical applications approved by the FDA were in radiology, a field Hassandoust says is well-suited to AI’s strengths in pattern recognition and image enhancement.

“Unlike what we call ‘black box’ systems, like ChatGPT, which don’t explain how they work, explainable AI can help patients, clinicians and radiologists better understand and gain confidence in these emerging technologies.

“These tools can enhance diagnostic precision, address workforce shortages and reduce healthcare costs.”
