The World Health Organization (WHO) has recently called for caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools, in order to protect and promote human well-being, safety, and autonomy, and to preserve public health. LLMs include some of the most rapidly expanding platforms, such as BERT, ChatGPT, and Bard, which mimic the human processing, understanding, and generation of language. The rapid uptake of these tools for health-related purposes has considerably improved access to health information and supported people’s health needs.
However, the WHO urges caution when LLMs are used as decision-support tools. While they can enhance diagnostic capacity and reduce health inequities, they also carry risks. The WHO emphasizes that these risks must be weighed with due regard for the values of transparency, inclusiveness, public engagement, professional oversight, and rigorous evaluation.
Fairness and inclusiveness must be ensured when AI is used as a decision-support tool. The data used to train AI may be biased, producing misleading or inaccurate information and posing risks to health. LLMs generate authoritative-sounding, plausible responses that may nonetheless be entirely inaccurate or contain serious errors, especially on health-related topics. Users may also be unable to protect sensitive data, including health data, that they supply to these applications in order to generate responses.
Furthermore, LLMs may be misused to generate and disseminate convincing disinformation in the form of text, audio, or video content that is difficult to distinguish from reliable health content. The WHO remains committed to harnessing new technologies, including AI and digital health, to improve human health, and it therefore recommends safeguards against these risks.
The WHO proposes that these concerns be addressed before LLMs are widely adopted by individuals, caregivers, health system administrators, and policymakers in routine care and medical practice. When designing, developing, and deploying AI for health, it is essential to follow the ethical principles and good governance outlined in the WHO Guidance on Ethics and Governance of AI for Health. The WHO identifies six core principles: (1) protect autonomy; (2) promote human well-being, human safety, and the public good; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and fairness; and (6) promote AI that is responsive and sustainable.