![](https://hitconsultant.net/wp-content/uploads/2024/05/danielle-bitterman-800x800-1.jpeg)
What you should know:
– A new study conducted by researchers from Mass General Brigham suggests large language models (LLMs), a type of artificial intelligence (AI), could be a useful tool for helping doctors streamline communication with patients. However, the study also highlights the importance of human oversight to ensure patient safety.
– The results, published in The Lancet Digital Health, highlight the need for a measured approach to LLM implementation.
The burden of patient communication on doctors
Doctors today face a growing administrative workload, including responding to patient portal messages, which can contribute to burnout and hinder patient care. The study explored the potential of AI to help doctors draft responses to patient messages.
AI generates draft responses for review
Researchers used GPT-4, a powerful LLM, to create draft responses to 100 hypothetical patient questions about cancer. Radiation oncologists then reviewed and edited these AI-generated responses.
Promising results, but caution is required
The study found that:
- Doctors perceived the AI assistance as effective.
- More than 80% of AI-generated responses were deemed safe by doctors.
- Nearly 60% of these responses required no further editing before being sent to patients.
However, the study also identified potential risks:
- A small percentage (7.1%) of unedited AI responses could mislead patients and pose health risks.
- In rare cases (0.6%), these responses could even delay essential medical care.
Importance of maintaining human oversight
Interestingly, doctors often kept the AI-generated educational content when editing responses. While this points to the potential benefits of AI-generated patient education, the study underscores the importance of human oversight to mitigate safety risks.
Mass General Brigham's commitment to responsible AI
Mass General Brigham is committed to the responsible development and implementation of AI. The health system is currently running a pilot program that integrates AI-drafted messages into its electronic health record (EHR) system. Future research will focus on patients' perceptions of AI-generated communication and on potential biases in AI algorithms.
“Generative AI has the potential to provide the best of both worlds: reducing the burden on the clinician and better educating the patient in the process,” said corresponding author Danielle Bitterman, MD, a faculty member in the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham and a physician in the Department of Radiation Oncology at Brigham and Women's Hospital. “However, based on our team's experience working with LLMs, we are concerned about the potential risks associated with integrating LLMs into messaging systems. As the integration of LLMs into EHRs becomes increasingly common, our goal in this study was to identify relevant advantages and disadvantages.”