The 21-Year-Old Who Asked ChatGPT 'Can You Die from Sleeping Pills and Alcohol?' and Was Charged with Murder: Korea's First Case of AI Conversation Records as Court Evidence
In a serial drug poisoning murder case in Gangbuk, Seoul, police used suspect Kim's ChatGPT conversation history as key evidence to prove intent to kill. The question 'Can you die from mixing sleeping pills and alcohol?' was adopted as digital forensic evidence proving murder intent, sparking a major debate over the legal status of AI conversation records.
Why you need to read this now: A question typed into an AI chatbot can be used to prove murderous intent. For the first time in Korea, a ChatGPT conversation history has been adopted as key evidence in a murder case.
TL;DR
- 21-year-old woman Kim spiked drinks with benzodiazepine drugs, killing 2 men and leaving 1 in critical condition
- Police digital forensics found ChatGPT query history: "Can you die from sleeping pills and alcohol?", "How much is dangerous?"
- AI conversation records were adopted as evidence of 'intent,' upgrading the charge from injury resulting in death to murder
- Recorded as the world's first case where generative AI conversation records were used as key evidence in a murder case
- Lawyers: "Now the first thing I check when meeting a client is their ChatGPT conversation history"
What Happened
On February 11, 2026, the Seoul Metropolitan Police Agency referred 21-year-old woman Kim to prosecutors on murder charges. Kim is accused of giving benzodiazepine (psychotropic) drug-laced drinks to three men in their 20s at a motel in Suyu-dong, Gangbuk-gu, Seoul, and a café in Namyangju, Gyeonggi Province, between December 14, 2025, and February 9, 2026 — killing two of them and leaving one unconscious.
In initial questioning, Kim stated, "I did put drugs in the drinks, but I didn't know it was lethal." Police found decisive evidence to overturn this claim through smartphone digital forensics.
The Trail Left by ChatGPT
When police's Digital Forensics Investigation Unit analyzed Kim's smartphone, the following query history was found in the ChatGPT app:
- "What happens if you take sleeping pills and alcohol together?"
- "How much is dangerous?"
- "Can a person die?"
ChatGPT repeatedly warned in response to each question that "combining the two substances can be fatal." Yet Kim continued committing the crimes, according to the investigation. Based on these conversation records, police concluded that Kim had clearly been aware in advance of the lethality of combining drugs, establishing murder intent.
Why This Case Is Attracting Worldwide Attention
The Era of AI Forensics Has Begun
Digital investigations have traditionally relied on internet search histories, messenger content, and location data as key evidence. This case goes a step further: it is recorded as the world's first instance in which generative AI conversation records served as direct evidence of criminal intent.
"Search records only show the fact that 'information was sought,' but AI conversation records expose the user's intent and thought process." — Legal expert (name withheld)
The conversational structure of AI chatbots is fundamentally different from traditional search. A single search term rarely reveals intent, but the back-and-forth of questions and answers exchanged with an AI lays out the user's reasoning, purpose, and plans far more clearly.
Changing Legal Practice
Korea's legal community is responding quickly. One lawyer, who requested anonymity, told the Korea Herald:
"Now, when I take on a case, the first thing I check is the client's ChatGPT conversation history."
This is expected to have sweeping impacts not just on criminal cases but across civil disputes and corporate litigation as well.
Spread Mechanism: Why This News Went Viral
| Factor | Details |
|---|---|
| Trigger | Simultaneous reporting by global media including BBC, TechRadar, People, and EFE |
| Why It Spread | Powerful headline 'ChatGPT as murder evidence' + fear resonance in an age of AI normalization |
| Domestic Coverage | Simultaneously placed on AI, legal, and social pages; real-time search trends spiked |
| International Response | Spread into AI safety and privacy debate |
Context and Background
Coinciding with Korea's AI Basic Act Implementation
This case emerged immediately after Korea became the world's first country to implement an AI Basic Act in January 2026. While the Act focuses on regulating high-risk AI systems and ensuring transparency, specific standards for using AI conversation records in investigations have yet to be established.
The Court Administration Office also began work on an 'AI Guidebook for Judges' in October 2025, with planned distribution to all judges nationwide in March 2026. The speed at which AI is entering the courtroom is outpacing institutional preparation.
Simultaneous with KBS·MBC·SBS Copyright Lawsuit Against OpenAI
On February 23, 2026, Korea's three major broadcasters filed a copyright infringement lawsuit against OpenAI. Legal battles over AI are mounting and interconnected, moving beyond individual cases toward the legal liability of AI companies themselves.
Outlook: How Far Will This Go?
This issue will not end as a one-off crime story. Structurally, it is expected to lead to the following debates:
- 🔴 Misuse of 'Temporary Chat' feature: The possibility of ChatGPT's 'disable history' function being used to destroy evidence
- 🟡 Can AI conversation records be collected without a warrant?: Conflict between Article 17 of the Constitution (right to privacy) and investigative authority
- 🔵 AI companies' obligation to preserve evidence: Absence of legal standards for whether OpenAI, Google, etc. must provide user data to investigators
- 🟢 Growing self-censorship: 'ChatGPT phobia' — the phenomenon of people hesitating to ask sensitive questions of AI
Checklist: What AI Users Need to Know
Risk Assessment
| Type | Level | Details |
|---|---|---|
| Privacy Violation | 🔴 High | Possibility of broad investigative use of AI conversations |
| Misinformation Risk | 🟡 Medium | Factual discrepancies between overseas and domestic reporting |
| Self-Censorship Concern | 🟡 Medium | Social risk of a chilling effect on AI usage |
| Regulatory Gap | 🔴 High | Urgent need to legislate standards for AI evidence collection and use |
Reference Links
- BBC: Woman accused of using ChatGPT to plan drug murders
- Korea Herald: What you ask AI could land you in court
- TechRadar: ChatGPT search trail becomes central evidence in South Korea double murder probe
- 투데이신문: Will AI conversation records emerge as investigative evidence?
- 법률신문: Police combing through ChatGPT prompts… the era of AI forensics
Image credit: View of the Constitutional Court of Korea, Wei-Te Wong (Wikimedia Commons, CC BY-SA 2.0)