MIT study reveals large AI language models exhibit left-leaning bias despite objective training data.

Researchers at MIT found that large language models, which power AI applications such as ChatGPT, can exhibit a left-leaning political bias even when trained on objective information. The study, led by PhD candidate Suyash Fulay and Research Scientist Jad Kabbara, showed that the bias persisted even when the models were trained on datasets intended to be factual and truthful, raising concerns about the reliability and potential misuse of these models.
