
Since early 2024, hackers have used AI to boost phishing, scams, and influence operations targeting Taiwan, U.S. schools, and Chinese critics, but no fundamentally new threats have emerged.

OpenAI’s latest threat report reveals that since early 2024, malicious actors, including scammers and state-backed groups, have increasingly used AI tools like ChatGPT, Claude, and DeepSeek to enhance existing cybercrime tactics such as phishing, scam content creation, and influence operations, rather than developing new ones. The company disrupted over 40 malicious networks, with attacks targeting Taiwan’s semiconductor sector, U.S. academic institutions, and critics of the Chinese Communist Party, often using AI to automate messages, build fake investment sites, and fabricate financial advisor personas. While AI improves the efficiency and scale of these operations, no fundamentally new threats have emerged. Some actors are adapting by removing stylistic markers like em dashes to evade detection. Notably, AI is now used three times more often to detect scams than to create them, highlighting its dual role in both enabling and combating cybercrime. OpenAI’s safeguards continue to block clearly malicious requests.
