Scarier Than Reality: 5 Scenarios in Which AI Deepfakes from the Iran War Are Shaking Military Decisions and Threatening Korea's June 3 Local Elections
On day 5 of the Iran war, controversy erupted over AI-generated fake footage of an Iranian family evacuation influencing U.S. military operational judgments. Reported simultaneously by Dong-A Ilbo and BBC Korea, the 'AI information war' is now converging with Korea's June 3 local elections, turning the deepfake threat into reality.

One-line hook: AI-generated fake footage infiltrated actual U.S. military operational decisions on the Iran battlefield — and Korea's June 3 local elections are just 90 days away.
TL;DR
- On day 5 of the U.S.-Iran war (March 3, 2026), footage of an Iranian family evacuation was revealed to be AI-generated, causing confusion in U.S. military assessments of civilian casualties.
- AI-synthesized images of a B-2 bomber over Tehran, deepfake videos of Iranians declaring "love for Israel," and more — both sides are using AI deepfakes as tools of information warfare.
- BBC Korea and Dong-A Ilbo simultaneously reported on March 3, sharply elevating awareness of AI information warfare in Korea.
- Korea faces a double threat ahead of the June 3 local elections (90 days away) with a sweeping ban on AI deepfake campaigning imminent.
- A Gachon University research team published deepfake-detection AI technology in an international academic journal on March 3, formally kicking off counter-technology development.
🔍 The Facts: AI Deepfake Controversy on the Iran Battlefield
On March 3 — day 5 of the war sparked by U.S. and Israeli airstrikes on Iran beginning February 28, 2026 — an entirely new front opened: AI-generated war footage.
Dong-A Ilbo reported at 3:41 PM on March 3: "Fake Controversy Over Iranian Family Evacuation Footage Reaches U.S. Military Operations… AI Has Infiltrated the War." AI-generated fake footage circulated as U.S. forces were assessing civilian evacuation situations, causing confusion in actual operational intelligence.
BBC Korea reported on the same day, detailing specific types of deepfakes:
- AI-synthesized image of a B-2 bomber over Tehran — maximizing fear of a possible attack on Iranian nuclear facilities
- Deepfake video of Iranians declaring "love for Israel" — exaggerating anti-government sentiment within Iran
- AI image of Iran shooting down an F-35 — spread directly by Iranian state media
- Iranian family evacuation footage — designed to sow confusion about civilian casualties
Notably, both Iran and Israel/the U.S. deployed AI deepfakes. The IDF also posted old missile attack footage as if it were a new strike, receiving a Community Note on X (Twitter).
📡 Spread Mechanism: Why AI Information Warfare Is Replacing War Itself
1️⃣ Asymmetry of Speed: AI Is Thousands of Times Faster Than Fact-Checkers
In past wars, disinformation was created by humans. Now, AI generates footage that looks real within seconds. AI systems like CounterCloud automate dozens of fake accounts to simultaneously spread disinformation — at a pace no single fact-checker can match.
2️⃣ X (Twitter) Emerges as the Primary Distribution Platform
According to BBC analysis, most deepfakes related to the Iran war spread on X. Users turned to X's AI chatbot Grok for verification, and the fact that Grok had most accurately predicted the date of the Iran strikes (a trending topic in Korea on March 3) paradoxically boosted its credibility, concentrating demand for disinformation verification on the platform.
3️⃣ State Actors Formalizing AI Information Warfare
Microsoft exposed China's state-sponsored hacking group "Storm-1376," which had already created AI-generated news anchors to spread disinformation (footage archived on Wikimedia Commons). Iran, Russia, and China all maintain similar AI information warfare infrastructure.
🌐 Context: Korea Is in a State of Double Threat
Korea's Exposure to the Iran War AI Information War
Korea faces direct harm from the Iran war through its dependence on energy imports, and is simultaneously suffering secondary damage from the AI deepfake information war. If AI-exaggerated footage of "Korea's economy paralyzed by a Strait of Hormuz blockade" or anti-American deepfakes originating from Iran flow into domestic social media, they could amplify economic fears.
June 3 Local Elections: The Biggest Risk with 90 Days to Go
The Korean government will implement a sweeping ban on deepfake campaigning starting March 5. Prime Minister Kim Min-seok declared on February 26 that "AI deepfake fake news is the public enemy of democracy." The Ulsan Nam-gu election commission has already set a precedent by referring a candidate to the police for publishing false information via deepfake (resulting in a ₩5 million fine).
If the Iran war is combined with Korean political deepfakes — for example, a fake video of a specific candidate making anti-American or pro-Iran statements — the explosive potential to upend the election is real.
Gachon University's Deepfake Detection Technology: The Beginning of a Response
On March 3, a research team led by Professor Choi Chang in Gachon University's Department of Computer Science published, in an IEEE international journal, an AI technique called MSG (Multimodal Semantic-Similarity Gate) that detects deepfakes in real time. The technology can flag deepfakes in live video, including CCTV feeds, with potential applications in both wartime and election contexts.
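The published paper is not reproduced here, but the core idea behind a multimodal semantic-similarity gate can be sketched: embed a clip's visual stream and its audio/text stream separately, then flag clips whose cross-modal similarity drops below a threshold, since a deepfake that swaps one modality tends to break the semantic agreement between them. Everything below (function names, the threshold value, the toy embeddings) is an illustrative assumption, not the research team's actual implementation.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_gate(visual_emb, audio_emb, threshold=0.7):
    """Hypothetical gate: in genuine footage the modalities should agree
    semantically; a manipulated modality tends to lower the similarity."""
    score = cosine_similarity(visual_emb, audio_emb)
    return ("suspect", score) if score < threshold else ("pass", score)

# Toy vectors standing in for real encoder outputs (e.g. CLIP-style models).
real_clip = semantic_gate([0.9, 0.1, 0.4], [0.8, 0.2, 0.5])
fake_clip = semantic_gate([0.9, 0.1, 0.4], [-0.2, 0.9, 0.1])
```

In a real pipeline the embeddings would come from trained multimodal encoders and the threshold would be calibrated on labeled data; the gate itself is just this comparison, which is what makes real-time operation plausible.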
📊 5 Scenarios: Threats AI Information Warfare Poses to Korea
| # | Scenario | Trigger | Estimated Lifespan | Risk Level |
|---|---|---|---|---|
| 1 | Iran War Energy Crisis Exaggeration Deepfake | AI-synthesized Hormuz blockade footage enters Korea | One-off to half a day | ⚠️ High |
| 2 | Local Election Candidate Deepfake Video | Campaign overheating 90 days before June 3 election | 1–3 days | 🔴 Very High |
| 3 | Fake AI Government Press Conference | Forged emergency announcements on Iran war or economy | One-off | ⚠️ High |
| 4 | Stock/Financial Deepfake Manipulation | Fake statements by CEOs or ministers amid KOSPI crash | One-off | 🔴 Very High |
| 5 | North Korea's AI Information War Intervention | Exploiting Iran war chaos to amplify Korean Peninsula instability | Long-term | 🔴 Very High |
🔭 Outlook: Korea's Response Challenges in the Age of AI Information Warfare
Short-term (1–2 weeks): While the Iran war continues, AI deepfake infiltration will be at its peak. Korea's fact-checkers and media AI verification systems urgently need to be activated.
Medium-term (3 months): Until the June 3 local elections, a triple defense line of deepfake detection law, technology, and education must be operational. The key is deploying real-time detection technologies from domestic teams like Gachon University in the field.
Long-term: Korea has the capacity to build world-class AI deepfake detection technology. A strategy to establish "K-deepfake detection" as a global standard is needed.
✅ Checklist: 5 Ways Individuals Can Spot AI Deepfakes
- Check the source: is the account an official news outlet or government channel, or an anonymous, newly created one?
- Run a reverse image search to see whether "new" footage is old material recycled under a fresh caption.
- Look for telltale artifacts: distorted hands, garbled on-screen text, unnatural shadows, or mismatched lip-sync.
- Cross-check with at least two established news outlets before sharing.
- Be extra skeptical of emotionally charged "breaking" footage; strong emotion is the fastest spread vector.
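Reverse image search engines typically rely on perceptual hashes that survive recompression and resizing, which is why they can match recycled war footage to its original upload. As a hedged illustration (this is a generic difference hash, not any specific engine's algorithm, and the tiny grayscale frames are made up), the idea fits in a few lines of Python:

```python
def dhash(pixels):
    """Difference hash: for each row, record whether each pixel is brighter
    than its right neighbor. Mild noise or recompression barely changes
    these comparisons, so a recycled frame keeps a near-identical hash."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits; a small distance means likely the same image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy 4x4 grayscale frames: 'reposted' is the original with slight noise.
original = [[10, 20, 30, 40], [40, 30, 20, 10], [5, 50, 5, 50], [60, 60, 60, 60]]
reposted = [[11, 19, 31, 39], [41, 29, 21, 11], [6, 49, 6, 49], [59, 61, 59, 61]]
unrelated = [[90, 10, 90, 10], [10, 90, 10, 90], [90, 10, 90, 10], [10, 90, 10, 90]]

d_same = hamming(dhash(original), dhash(reposted))      # small distance
d_diff = hamming(dhash(original), dhash(unrelated))     # large distance
```

Production systems hash downscaled frames of real video and search at scale, but the matching principle is the same: near-zero Hamming distance means the "new" clip is almost certainly old footage.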
References
- Fake Controversy Over Iranian Family Evacuation Footage Reaches U.S. Military Operations… AI Has Infiltrated the War (Dong-A Ilbo, 2026.03.03)
- Israel-Iran Conflict: Warning Over AI Deepfake Footage Spreading Online (BBC Korea)
- Gachon University Research Team Develops AI Technology for Deepfake Detection (Herald Economy, 2026.03.03)
- Deepfake Videos Spreading Ahead of Local Elections… Government: 'Zero-Tolerance Response' (Hankyoreh, 2026.02.26)
- Morgan Stanley: 'Asian Economies Including Korea Exposed to Risk from Iran War' (Yonhap News, 2026.03.03)
Image Credit
- CounterCloud AI Disinformation Automation Diagram: Wikimedia Commons, CC0 Public Domain (Source: 2024 AI Index)