Florida Shooting Investigation Puts OpenAI Under Legal Scrutiny

A criminal investigation into OpenAI has been opened in the United States after authorities alleged that a student who carried out a deadly shooting at Florida State University used ChatGPT to seek advice before the attack.

Florida Attorney General James Uthmeier said investigators found evidence showing that Phoenix Ikner, accused of killing two people and injuring six others on campus last year, had asked the AI chatbot about weapons, ammunition and methods to maximise casualties.

According to investigators, the chatbot responded to the questions, raising concerns about whether AI developers could face criminal responsibility when their systems are linked to violent crimes.

“If the thing on the other side of the screen was a person, we would charge it with homicide,” Uthmeier said while announcing the investigation. He added that prosecutors were examining whether charges could eventually be brought against OpenAI or individuals connected to the company.

The case has intensified debate over the legal responsibilities of artificial intelligence firms as AI tools become more widely used in daily life. Legal analysts say the investigation could become one of the first major attempts to test criminal liability involving generative AI systems.

Unlike previous corporate criminal cases involving companies such as Purdue Pharma or Volkswagen, the Florida case centres on the actions of an AI product rather than direct decisions made by executives or employees.

Matthew Tokson, a law professor at the University of Utah, said the case presents a difficult legal challenge because prosecutors would need to prove that the company knew the risks associated with its technology and nonetheless acted negligently or recklessly.

Experts note that criminal prosecutions of corporations are relatively uncommon in the United States and must meet the high standard of proof beyond a reasonable doubt. Some legal scholars believe a civil lawsuit may offer a more practical route for families seeking accountability.

OpenAI defended its platform, saying it continues to improve safeguards designed to detect harmful intent and reduce misuse. The company stated that it regularly updates safety measures to prevent dangerous or violent requests from being fulfilled.

The investigation comes as several civil lawsuits have already been filed in the US against AI companies over claims that chatbots contributed to suicides or violent behaviour. None of those cases has yet produced a court ruling against an AI developer.

Brandon Garrett, a law professor at Duke University, said the case also highlights the absence of clear national regulations governing artificial intelligence systems.

Lawmakers in Washington have faced growing pressure to establish legal standards for AI companies as concerns increase over misinformation, cybercrime and the potential misuse of advanced chatbot technology.