OpenAI sued over Canada school shooting failure

The family of a 12-year-old critically injured in the mass attack last month claim the ChatGPT-maker failed to flag the shooter’s violent activity
OpenAI, the maker of ChatGPT, knew that a transgender user in Canada was planning a mass shooting but failed to notify law enforcement, the parents of a young girl critically injured in the attack alleged in a civil lawsuit filed on Monday.
The mother of 12‑year‑old Maya Gebala, who remains hospitalized after the shooting in Tumbler Ridge on February 10 that left nine dead, stated in the lawsuit that the tech company failed to notify authorities about violent chat prompts from the shooter.
Jesse Van Rootselaar, the 18-year-old transgender shooter, died by suicide after killing several students in one of the deadliest mass shootings in Canadian history.
The lawsuit, filed in the British Columbia Supreme Court, alleges that OpenAI had “specific knowledge of the shooter utilizing ChatGPT to plan a mass casualty event like the Tumbler Ridge mass shooting” but did not alert law enforcement.
The claim states that Gebala was shot three times, sustaining a “catastrophic, traumatic brain injury” that caused permanent cognitive and physical disabilities, along with other serious medical complications.
OpenAI has said it considered reporting the user’s activity to police but ultimately did not. The company only provided information to authorities after the attack, noting that the shooter’s ChatGPT account had been closed but that Rootselaar evaded the ban by creating a second account.
The suit contends that ChatGPT acted as a “trusted confidant, collaborator and ally” for the shooter and claims the company’s inaction contributed to the severity of the girl’s injuries and to harm suffered by others in the community.
Last month, Canadian officials summoned senior OpenAI representatives to Ottawa to review the company’s safety protocols following the school shooting. Canada’s artificial intelligence minister, Evan Solomon, said earlier this month that OpenAI CEO Sam Altman has agreed to give Canadian experts access to the company’s safety office to help assess potential future threats. Solomon met last week with Altman, who expressed “horror and responsibility” over the failure to flag concerning activity and said the company is implementing changes.
Last year, OpenAI updated ChatGPT following an internal review that found more than a million users had disclosed suicidal thoughts to the chatbot. Psychiatrists have warned that extended interactions with AI can contribute to delusions and paranoia, a phenomenon sometimes referred to as “AI psychosis.”
Source: https://swentr.site