Artificial Intelligence in Healthcare?

On May 16, 2023, the World Health Organization (WHO) issued a warning about the risks of AI in public healthcare systems. Large Language Models (LLMs) such as ChatGPT and Bard can produce biased information, may suggest incorrect diagnoses, and can be manipulated to generate disinformation[2]. The warning came shortly after OpenAI – the developer of GPT-4 – announced that the model had passed multiple medical exams.

While the WHO is keen on AI’s potential to expand and enhance the healthcare system, it feels the risks and issues are not being examined as thoroughly as they normally would be for a new technology[1]. Its statement reads: “This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation.”[3] The main risk cited was the speed of adoption: within four months of its release, ChatGPT had become one of the fastest-growing applications ever. In short, the WHO believes a thorough risk assessment is necessary, and that AI may not yet be everything it seems for the healthcare system.
