One-Fifth of Primary Care Doctors Use AI for Their Clinical Work: Survey

A survey of primary care physicians in the UK found that over 20 percent use artificial intelligence (AI) for diagnosis and treatment suggestions.
ChatGPT, which was launched in November 2022, was the most popular tool among doctors.
“Preliminary evidence suggests the impressive abilities of these tools to assist with writing empathic documentation,” the authors wrote. “However, these tools also carry limitations. They are prone to creating erroneous information.”
Over 1,000 physicians were surveyed. Among those using AI, around 30 percent used it to generate documentation after patient appointments, while 28 percent used it to suggest diagnoses and 25 percent to suggest treatment options.
Charlotte Blease, the lead author of the study and associate professor in the department of women’s and children’s health at Uppsala University in Sweden, said she was surprised by the proportion of primary care doctors using AI in their work.
When asked about AI use in the United States, Blease told The Epoch Times that her guess is that American “doctors may be using these tools in greater numbers than we suspect.”
“My instinct is that doctors, like workers and students in other fields and domains, may be using these tools surreptitiously,” she said.
The findings were published Tuesday in BMJ Health &amp; Care Informatics.
The survey is the largest study to date of doctors’ use of generative AI, a form of AI capable of creating content, in clinical practice, the authors say.
Following the launch of ChatGPT at the end of 2022, interest in large language model-powered chatbots has soared, and attention has increasingly focused on the clinical potential of these tools.
Their findings indicate that doctors “may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,” the authors wrote.
“It is not clear how the internet companies behind generative AI use the information they gather,” the authors write.
Blease said that doctors can use medical-grade generative AI that is HIPAA compliant for patient privacy, though these AIs still carry the risk of generating mistakes and biased recommendations.
In that research, doctors interviewed patients about their disease history while ChatGPT documented it; the authors found that 36 percent of the resulting documents contained erroneous information.
“Further research is needed to investigate doctors’ adoption of generative AI and how best to implement these tools safely and securely into clinical practice,” the authors say, noting that there is currently no formal guidance on how doctors should use these tools.
“The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risks.”
Blease said that it is unlikely that routine use of generative AI will negatively reflect on the competency of doctors.
“If they are using these tools to brainstorm in clinical decisions or as second opinions, this may be a particularly valuable use of these tools right now.”
