Since early 2024, malicious actors have used AI to boost phishing, scams, and influence operations, targeting Taiwan's semiconductor sector, U.S. academic institutions, and critics of the Chinese Communist Party, but no fundamentally new threats have emerged.
OpenAI’s latest threat report reveals that since early 2024, malicious actors, including scammers and state-backed groups, have increasingly used AI tools like ChatGPT, Claude, and DeepSeek to enhance existing cybercrime tactics such as phishing, scam content creation, and influence operations, rather than to develop new ones.
The company disrupted over 40 malicious networks, with attacks targeting Taiwan’s semiconductor sector, U.S. academic institutions, and critics of the Chinese Communist Party, often using AI to automate messages, build fake investment sites, and fabricate financial advisor personas.
While AI improves the efficiency and scale of these attacks, no fundamentally new threat categories have emerged.
Some actors are adapting by stripping stylistic markers, such as em dashes, from AI-generated text to evade detection.
Notably, AI is now used three times more often to detect scams than to create them, highlighting its dual role in both enabling and combating cybercrime.
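To make this cat-and-mouse dynamic concrete, here is a minimal, hypothetical sketch in Python of the kind of stylistic-marker heuristic such evasion targets. The marker list, the threshold, and the function names `marker_score` and `looks_machine_styled` are illustrative assumptions, not OpenAI's actual detection method.

```python
import re

# Toy illustration only: a naive stylistic-marker heuristic, NOT OpenAI's
# real detection pipeline. The marker set and threshold are assumptions.
STYLISTIC_MARKERS = {
    "em_dash": "\u2014",          # the em dash mentioned in the report
    "curly_apostrophe": "\u2019",
    "curly_open_quote": "\u201c",
}

def marker_score(text: str) -> float:
    """Return stylistic-marker density per 100 characters of text."""
    if not text:
        return 0.0
    hits = sum(text.count(ch) for ch in STYLISTIC_MARKERS.values())
    return 100.0 * hits / len(text)

def looks_machine_styled(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose marker density exceeds an arbitrary threshold."""
    return marker_score(text) >= threshold

sample = "Act now\u2014this \u201cexclusive\u201d offer won\u2019t last."
print(marker_score(sample), looks_machine_styled(sample))   # flagged

# Stripping the markers (the evasion tactic described above) defeats
# this naive check entirely:
stripped = re.sub(r"[\u2014\u2019\u201c\u201d]", "", sample)
print(marker_score(stripped), looks_machine_styled(stripped))  # not flagged
```

Even this toy check shows why the tactic works: a detector keyed to surface style is trivially defeated once the telltale characters are removed, which is why real defenses cannot rely on stylistic signals alone.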
OpenAI’s safeguards continue to block clearly malicious requests.