Artificial intelligence (AI) has permeated many aspects of medicine, with promises of accurate diagnoses, better management decisions, and improved outcomes for both patients and the healthcare system. However, successful implementation of AI technology in clinical practice hinges on healthcare providers' trust in and acceptance of such tools.
Now, using the treatment of digestive diseases as a case study, an international study led by Nanyang Technological University, Singapore (NTU Singapore) has found that doctors in gastroenterology practice generally trust and accept AI medical tools.
In a survey of 165 gastroenterologists and gastrointestinal surgeons in the Asia-Pacific region, the NTU Singapore-led research team found that eight in 10 said they accept and trust the use of AI-powered tools in diagnosing and assessing colorectal polyps (benign growths in the colon that could become cancerous).
When it came to using AI to guide an endoscopist on whether to remove polyps found in the bowel during a screening colonoscopy, seven in 10 said they accept and trust this AI-assisted application.
The research team found no difference in acceptance levels between male and female doctors, between those working in public and private settings, or between those working in large hospital units and small group practices.
However, the number of years of experience was a crucial factor. While one might expect younger doctors to be more receptive to using technology in clinical decision-making, the study found that gastroenterologists with fewer than 10 years of clinical experience perceived these AI-powered medical tools as riskier than did their colleagues with more than 10 years of experience.
The findings, published in the scientific journal JMIR AI in March, highlight the need for more research into what influences doctors’ acceptance of AI in their medical practice, said the team of scientists from Singapore, China, Hong Kong, and Taiwan.
Assistant Professor Wilson Goh from the NTU Lee Kong Chian School of Medicine (LKCMedicine), who led the study, said: “For this study, we zoomed in on the use of AI in the context of gastroenterology because we see that this specialty, with its heavy usage of image-based diagnosis and surgical or endoscopic intervention, will be able to readily use AI technologies in clinical management. This is one of the earliest reports of AI risk perception, acceptance, and trust among gastroenterologists, with a unique focus on the Asia-Pacific region.”
"Although the study participants found certain AI technologies risky, most practitioners still trusted and accepted these applications, highlighting the intricate relationship between the complexity of AI technologies and their acceptance. AI has the potential to revolutionize the healthcare sector, but for AI to be integrated into the sector, a better understanding of the factors that underpin clinicians' trust and acceptance towards AI-powered medical tools is necessary."
Asst Prof Wilson Goh, Co-Director of NTU’s Centre for Biomedical Informatics
NTU Senior Vice President (Health & Life Sciences) and Director of the Centre for AI in Medicine Professor Joseph Sung, one of the study's co-authors and a leading gastroenterologist, said: "The finding that more experienced gastroenterologists have a lower risk perception of AI tools is intriguing. Having more clinical experience in managing colorectal polyps may have given senior gastroenterologists greater confidence in their medical expertise and practice, thus generating more confidence in exercising clinical discretion when new technologies are introduced."
On the other hand, a general lack of confidence when there is a discrepancy between the AI's assessment and their own experience may be one reason why less experienced doctors perceive AI as riskier when it involves invasive operative procedures, such as the removal of polyps in the colon, said Professor Sung, who is also the Dean of NTU's LKCMedicine.
“A greater emphasis on AI, as we have implemented in NTU LKCMedicine’s recently refreshed curriculum, may help mitigate risk aversion and promote responsible AI use in clinical practice,” he added.
Professor May O. Lwin, Chair of the NTU Wee Kim Wee School of Communication and Information, another study co-author whose research interest is in health communication, suggested that future studies could include patients' perspectives by assessing the circumstances in which patients would have concerns about the use of AI technology in managing their health conditions.
She added: “It is important to capture the perspectives of other stakeholders, such as nurses, endoscopy assistants, and the general public, to understand better how their opinions align or conflict with each other. This will help us more realistically navigate complex trust and acceptance issues and create valuable propositions and effective policies.”
How the study was conducted
For this study, the scientists developed a questionnaire based on questions or statements adapted from validated frameworks and models.
Participants were asked to rate their level of agreement with questions or statements designed to assess their trust, acceptance, and risk perception of AI use in gastroenterology.
Participants were also presented with three different medical scenarios in which AI could be applied:
- For detection: to assist in identifying the presence of colorectal polyps and improving the detection rate of polyps that are likely to turn into cancer
- For characterization: to assess the pathology of polyps and predict the risk of a colorectal polyp turning cancerous
- For intervention: to guide the removal of polyps in an endoscopy
For each medical scenario, the participants were asked to rate their agreement with statements that assess their perceived risk and trust in AI tools, including:
- I expect major risks involved with the AI diagnosis.
- I am ready to try the method myself.
They were also asked to rate their belief in statements that assess their acceptance of AI tools, such as:
- Do you believe that machine learning algorithms can, in some cases (such as the ones described above), perform better than human beings?
The scores for each participant were then tabulated and used in statistical analyses to examine how the factors of risk perception, acceptance, and trust may interact with one another.
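To make the tabulation step concrete, here is a minimal sketch in Python of how Likert-scale responses might be aggregated into per-construct scores and then correlated. The column names, the 5-point scale, and the choice of Spearman rank correlation are illustrative assumptions; the paper's actual statistical pipeline is not detailed in this article.

```python
# Illustrative sketch only: column names, the 5-point Likert scale, and
# the use of Spearman correlation are assumptions for demonstration,
# not the study's actual analysis pipeline.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical responses: one row per participant, each cell a 1-5
# Likert rating (1 = strongly disagree, 5 = strongly agree).
responses = pd.DataFrame({
    "risk_q1":   [4, 2, 3, 5, 2],
    "risk_q2":   [3, 2, 4, 4, 1],
    "trust_q1":  [2, 4, 3, 2, 5],
    "trust_q2":  [3, 5, 3, 1, 4],
    "accept_q1": [2, 5, 4, 2, 5],
    "accept_q2": [3, 4, 3, 2, 4],
})

# Tabulate a composite score per construct by averaging its items.
scores = pd.DataFrame({
    "risk":       responses[["risk_q1", "risk_q2"]].mean(axis=1),
    "trust":      responses[["trust_q1", "trust_q2"]].mean(axis=1),
    "acceptance": responses[["accept_q1", "accept_q2"]].mean(axis=1),
})

# Examine how the three constructs interact via rank correlations.
for a, b in [("risk", "trust"), ("risk", "acceptance"), ("trust", "acceptance")]:
    rho, p = spearmanr(scores[a], scores[b])
    print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")
```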
Source:
Nanyang Technological University
Journal reference:
Wen, W., et al. (2023). Risk Perception, Acceptance, and Trust of Using Artificial Intelligence in Gastroenterology Practice: Survey from the Asia Pacific Region. JMIR AI. https://doi.org/10.2196/50525