Researchers develop tool to detect disguised hate speech online, enhancing content moderation.

Researchers at the University of Auckland have created a tool that improves online content moderation by detecting disguised hate comments. The tool recognizes evasion tactics such as replacing letters with numbers or subtly altering words, helping traditional keyword filters spot hidden toxicity. The advance aims to protect users, especially younger audiences, and to make online environments safer by improving the detection of harmful content. Future development may add deeper context analysis.
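The core idea of undoing letter-for-number substitutions before filtering can be sketched in a few lines. This is a minimal illustration, not the researchers' actual method: the substitution table and the placeholder blocklist words below are assumptions made for the example.

```python
# Illustrative sketch: normalize common "leetspeak" substitutions
# (e.g. 5 -> s, 1 -> i) so a keyword filter can match disguised words.
# The mapping and blocklist are hypothetical examples only.

LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

BLOCKLIST = {"stupid", "idiot"}  # placeholder words, not a real moderation list


def normalize(text: str) -> str:
    """Lowercase the text and map substituted characters back to letters."""
    return text.lower().translate(LEET_MAP)


def is_flagged(comment: str) -> bool:
    """Flag a comment if any normalized token matches the blocklist."""
    tokens = normalize(comment).split()
    return any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)


# A plain keyword filter would miss "5tup1d"; the normalized check catches it.
print(is_flagged("you are 5tup1d"))  # True
print(is_flagged("have a nice day"))  # False
```

A production system would go further (handling spacing tricks, repeated letters, and context), which is where the deeper context analysis mentioned above would come in.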

November 26, 2024
