Court records say Phoenix Ikner exchanged over 13,000 ChatGPT messages before the 2025 Florida State shooting, as officials review AI’s role.
In short: Court records say the man accused in the 2025 Florida State University shooting exchanged over 13,000 messages with ChatGPT in the year before the attack.
Court records released by the State Attorney’s Office say Phoenix Ikner, who is accused of killing two people and injuring six others at Florida State University on April 17, 2025, had long conversations with ChatGPT. The two people who died were Robert Morales and Tiru Chabba.
The records say Ikner exchanged more than 13,000 messages with the chatbot in the year leading up to the shooting. A chatbot is a computer program you can type to, like texting a customer service line, but it replies with full sentences. Prosecutors say more than 200 of the messages could be used as evidence at trial.
According to the records, the chats covered many topics, including politics, past wars, relationships, suicide, violence, weapons, and school shootings. The records also describe questions about media coverage of attacks, the busiest times on campus, and a message sent hours before the shooting asking how the nation would react to an attack at FSU.
The trial is now scheduled for October 2026 after delays. Prosecutors are seeking the death penalty, and Ikner remains in custody after being shot by police during his arrest.
OpenAI, the company that makes ChatGPT, identified and shared the suspect’s account with law enforcement after the shooting, according to the report. Attorneys connected to the case are also planning a lawsuit against OpenAI, and Florida Attorney General James Uthmeier has opened a separate investigation into OpenAI tied to risks to minors.
This case is pushing a hard question into public view: what responsibility does an AI chatbot company have when someone uses its product while planning violence? Authorities are still reviewing whether the attack was planned ahead of time and what role, if any, the chats played.
Source: The New York Times