
Researchers Warn North Korean Hackers Used ChatGPT to Forge Fake Military IDs

Internet Trends

September 15, 2025
A North Korean hacking group allegedly used ChatGPT to forge South Korean military ID cards in phishing campaigns. Here’s how AI misuse is reshaping cyber threats.

North Korean Hackers Misuse ChatGPT for Fake Military IDs

A new cybersecurity report has found that a suspected North Korean hacking group used ChatGPT to generate a deepfake of a South Korean military ID card. The fake identification was reportedly deployed to make phishing attempts look more credible, tricking recipients into downloading malware capable of stealing sensitive information.

The group, known as Kimsuky, has long been linked to espionage campaigns targeting South Korean institutions. U.S. Homeland Security officials have also identified the group as part of North Korea’s broader strategy to collect global intelligence and bypass international sanctions.

How the AI Exploit Worked

According to researchers at South Korean cybersecurity firm Genians, the hackers bypassed ChatGPT's restrictions against creating government IDs by tweaking their prompts, coaxing the system into generating the fraudulent documents.

Mun Chong-hyun, director at Genians, explained that this is part of a broader pattern: cybercriminals are now using AI to streamline their operations — from planning attacks and writing malicious code to impersonating legitimate recruiters online.

Growing Trend: Hackers Turning to AI

This incident isn’t isolated. In recent months, cybersecurity experts have flagged multiple examples of AI misuse linked to North Korean actors:

  • Fake identities for job scams: Reports suggest hackers used Anthropic’s Claude Code AI tool to help secure remote jobs at U.S. Fortune 500 companies. The system reportedly assisted in generating resumes, cover letters, and even completing technical tasks once hired.
  • Recruitment impersonation: Earlier this year, researchers found AI-generated resumes and social media content being used to impersonate recruiters.
  • Phishing attacks: Emails targeting South Korean journalists, researchers, and human rights activists were disguised with domains that looked like official military addresses, raising the risk of successful breaches.

Why It Matters

North Korea has already been accused of cryptocurrency theft, IT contracting fraud, and cyber-espionage operations to generate revenue and support its nuclear program. The integration of AI-driven tools into these schemes marks a dangerous evolution, making scams more convincing and attacks more efficient.

Cybersecurity experts stress that governments and tech companies must work together to improve safeguards against AI misuse while raising awareness among potential targets of phishing campaigns.


