Study finds AI chatbots, including ChatGPT and Google's Gemini, lack safeguards against generating health disinformation.

AI chatbots, including ChatGPT and Google's Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a study published in the British Medical Journal (BMJ). The research, led by experts at Flinders University in Adelaide, Australia, together with international collaborators, found that the large language models (LLMs) powering publicly accessible chatbots fail to block attempts to generate realistic-looking disinformation on health topics.

March 20, 2024