AI can make online medical texts easier to understand
Study shows potential and limits
Millions of people search for health information on the internet, but many texts are difficult for laypeople to read and understand. A new study by researchers at Heilbronn University shows that artificial intelligence can help make medical content more readable. However, the improvements are moderate and still require close professional scrutiny.
Amela Miftaroski, a graduate of the Medical Informatics bachelor's degree program at Heilbronn University of Applied Sciences and lead author of the study, examined the readability of AI-simplified online medical articles in her final thesis; the resulting study has now been published in the international journal JMIR AI. "The publication of the study illustrates how we integrate practical research into teaching in the Medical Informatics degree course at Heilbronn University," emphasizes Dr. Monika Pobiruchin. "The fact that a bachelor's thesis leads to a publication in an international journal shows that work is already being carried out at a very high scientific level early on in the course."
The research team, consisting of Amela Miftaroski, Dr. Richard Zowalla, Martin Wiesner, and Dr. Monika Pobiruchin, analyzed 60 online medical articles on common diseases and relevant health topics. The texts were automatically simplified using four large language models, including ChatGPT and Microsoft Copilot. Established readability measures were then used to assess how well the revised texts could be understood by people with no prior medical knowledge.
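The article does not specify which readability measures the team applied. As a hedged illustration only, the sketch below computes the Flesch-Kincaid Grade Level, one widely used measure of this kind; the vowel-group syllable counter is a rough heuristic, the example sentences are invented, and the study's actual formulas (plausibly German-language ones) may well differ.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level, mapping a text to a US school grade:

    FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Invented example pair: a dense clinical sentence vs. a simplified version.
original = ("Hypertension is a chronic elevation of arterial blood "
            "pressure that substantially increases cardiovascular risk.")
simplified = ("High blood pressure is a long-term condition. "
              "It raises the risk of heart disease.")
print(f"original:   grade {flesch_kincaid_grade(original):.1f}")
print(f"simplified: grade {flesch_kincaid_grade(simplified):.1f}")
```

Lower grades indicate easier texts; guidelines for patient-facing health information often target roughly an 8th-grade level, the threshold the article refers to below.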
The results show that the AI models tested were able to increase readability overall. Microsoft Copilot in particular achieved significant improvements and reached the recommended intermediate level for half of the texts. ChatGPT-3.5 also delivered good results, while the other models achieved only minor improvements. Overall, however, the readability level recommended by experts, comparable to an 8th-grade reading level, was rarely reached.
In this context, Amela Miftaroski points to the risks: "Some AI-generated texts contained inaccuracies or omitted important contextual information, which can lead to misunderstandings or misinformation in the medical environment." Such inaccuracies show that, without expert guidance, AI models can quickly generate incorrect or out-of-context content. The method is therefore not suitable for private use, for example when laypeople want to simplify medical texts themselves using AI. Accordingly, the young researcher emphasizes: "AI can simplify the texts, but a professional review remains essential."
Despite these limitations, the authors see great potential: "Large language models could relieve the burden on healthcare facilities by creating initial text drafts that are then checked and finalized by specialists. This could make more comprehensible health information available to citizens in the long term," comments co-author Martin Wiesner.
The study thus provides important insights into how AI-supported systems could help improve health literacy in the future, while underscoring that caution is still required when large language models are used for medical questions.