OpenAI Sued by Families of Canada School Shooting Victims

April 29, 2026

OpenAI is the target of new lawsuits alleging the artificial intelligence company could have stopped the suspected killer behind the mass shooting in Tumbler Ridge, British Columbia, from using its popular chatbot, ChatGPT, ahead of the attack.

The cases were filed Wednesday in federal court in San Francisco against OpenAI and its chief executive officer, Sam Altman. One was brought by a 12-year-old girl who was shot during the attack and remains in intensive care, and by her mother. Another was brought by the mother of a girl killed in the shooting.


According to the lawsuits, OpenAI knew from the shooter’s ChatGPT use that Jesse Van Rootselaar, identified as the chief suspect behind the massacre in February at Tumbler Ridge Secondary School, was planning the attack, but made a “conscious decision not to warn authorities.”

“ChatGPT played a role in the mass shooting and OpenAI could have, and should have, prevented it,” according to the complaints, which allege the startup wanted to avoid having to contact police each time OpenAI’s safety team spotted a ChatGPT user planning to carry out a violent act.

OpenAI didn’t immediately respond to a request for comment.

A series of suits has been filed against chatbot makers since 2024, most of them targeting OpenAI and ChatGPT. Most allege that extensive use of the technology has inflicted a range of harms on children and adults alike, fostering delusions and despair in some users and leading others to death by suicide and even murder-suicide.

On Feb. 10, Van Rootselaar allegedly carried out the mass shooting in northeastern British Columbia, killing eight people, including her mother and stepbrother, and six others at the school, five of whom were children; more than two dozen others were injured. Van Rootselaar, 18, was found dead after the shooting from what appeared to be a self-inflicted wound.

In the wake of the shooting, OpenAI said it had banned Van Rootselaar last June for violating its ChatGPT usage policy. Her account was flagged at the time for messages deemed to have potential for violence, but OpenAI did not alert police. The Wall Street Journal first reported on OpenAI’s decision, saying concerned employees had urged the startup to report the situation to authorities.

Later in February, OpenAI revealed that the suspected killer had created a second ChatGPT account, which the company did not spot until her name was released by police. OpenAI told Canadian lawmakers that, under newly updated company rules, it would have referred Van Rootselaar to police.

Last week, Altman wrote in a letter published by Tumbler RidgeLines, a local news site, that he wanted to express his “deepest condolences to the entire community.”

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

The lawsuits come at a sensitive time for OpenAI, which is eyeing a much-anticipated public offering that’s poised to be one of the largest in history as the company approaches a trillion-dollar valuation.

OpenAI is also trying to fend off claims by Elon Musk that it abandoned its founding mission as a nonprofit when it restructured last year as a for-profit entity. At a trial in California that started this week, Musk may ask a judge to order the conversion to be unwound.

Top photo: A symbol for the OpenAI virtual assistant on a smartphone, arranged in Riga, Latvia, on Friday, Aug. 16, 2024. Photographer: Andrey Rudakov/Bloomberg.