Large Language Models (LLMs), the technology behind AI chatbots, are prone to generating false information. Researchers at the Oxford Internet Institute warn that these AI hallucinations pose a direct threat to science and scientific truth. According to their paper published in Nature Human Behaviour, LLMs are designed to produce helpful and convincing responses, with no guarantee that those responses are accurate or aligned with fact.
Users treat the models as sources of knowledge, asking them questions and accepting their responses as information. However, the data the models are trained on is not always accurate: the online sources they draw from can contain false statements, opinions, and other inaccuracies. The researchers caution that people often anthropomorphize LLMs and trust them as a human-like information source, regardless of whether their output is accurate.
Information accuracy is vital in science and education. The researchers therefore urge the scientific community to use LLMs as “zero-shot translators”: rather than relying on the model itself as a source of knowledge, users should supply it with the appropriate input data and ask it to transform that data into a conclusion or into code. Constraining the model in this way makes it easier to verify that the output is factually correct and consistent with the provided input.
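As a minimal sketch of what this “zero-shot translator” usage can look like in practice, the snippet below builds a prompt that contains all of the factual content and asks the model only to restate it, never to recall facts of its own. The `generate` callable, the `translate_to_summary` function, and the example measurements are hypothetical stand-ins, not taken from the paper; `generate` represents whatever LLM API a given workflow already uses.

```python
from typing import Callable

def translate_to_summary(measurements: dict[str, float],
                         generate: Callable[[str], str]) -> str:
    """Ask the model to restate supplied data, not to supply data itself."""
    # All facts the model may use are placed directly in the prompt.
    data_block = "\n".join(f"{name}: {value}" for name, value in measurements.items())
    prompt = (
        "Using ONLY the measurements below, write a one-paragraph summary "
        "of the results. Do not add any facts that are not listed.\n\n"
        f"{data_block}"
    )
    return generate(prompt)

# Illustrative usage (values and wrapper are hypothetical):
# summary = translate_to_summary(
#     {"sample_size": 120, "mean_error_mm": 0.42, "std_dev_mm": 0.07},
#     generate=my_llm_call,  # thin wrapper around any chat-completion API
# )
```

Because every claim in the output can be traced back to the data placed in the prompt, checking the result reduces to comparing it against the supplied input rather than auditing the model's internal knowledge.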
The Oxford professors believe that LLMs will undoubtedly assist with scientific workflows. However, they stress the importance of using them responsibly and keeping clear expectations about how they can contribute to scientific research.