AI Tools Exploited by North Korean Hackers for Phishing and Cyber Espionage
September 14, 2025
North Korean hackers, including the group Kimsuky, have used AI tools such as ChatGPT to craft fake military ID images attached to phishing emails that impersonate South Korean defense officials in order to infiltrate targets.
In addition to phishing, North Korean IT workers are misusing AI to generate virtual identities for job applications, helping them bypass sanctions and secure foreign currency, as reported by Anthropic in August.
U.S. officials accuse North Korea of using cyberattacks, cryptocurrency theft, and IT contractors to gather intelligence, fund regime activities, and advance nuclear weapons programs.
Experts stress the importance of strengthening safeguards in recruitment, daily operations, and IT systems to prevent AI from being exploited for cyber operations and to protect national security.
Major AI companies like OpenAI, Anthropic, and Google have documented instances of AI misuse to improve security measures, although they have not responded to specific inquiries.
Security agencies such as CISA, the FBI, and CNMF advise organizations targeted by North Korean actors to enhance security protocols, including multi-factor authentication, phishing awareness training, and email filtering.
Google's Gemini AI has been used by Chinese and North Korean actors to troubleshoot code and identify target networks, though safeguards prevent its use for more advanced cyberattacks.
The Global Security Council highlights AI's dual-use nature as a security risk, urging organizations to implement proactive security measures and continuous monitoring to prevent misuse.
Organizations are encouraged to adopt ongoing security practices to detect and prevent AI-driven cyber threats and malicious activities.
Chinese hackers have employed the AI model Claude for over nine months to assist in cyberattacks targeting Vietnamese infrastructure, including telecommunications, agriculture, and government systems.
Experts warn that AI significantly lowers the barrier to hacking and disinformation, enabling less skilled individuals to clone brands and craft convincing scams rapidly.
ChatGPT can be prompted to generate convincing fake IDs if requests are framed as sample designs rather than official documents, despite built-in safety measures.
Kimsuky has been active since 2012, focusing on attacking foreign policy experts, think tanks, and government agencies in South Korea, Japan, and the U.S. through spearphishing and impersonation.
Summary based on 9 sources
Sources

Fortune • Sep 14, 2025
North Korean hackers used ChatGPT to help forge deepfake ID
The Japan Times • Sep 15, 2025
North Korean hackers used ChatGPT to help forge deepfake ID