The family of a girl injured in the Tumbler Ridge school shooting has filed a lawsuit accusing OpenAI of knowing the shooter’s plans but failing to alert authorities, alleging the company flagged dangerous ChatGPT conversations yet only banned the account instead of notifying law enforcement. The case centers on whether AI platforms bear a duty to act when employees identify clear, imminent threats in user chats. The plaintiffs say the suspect used the chatbot as a confidante and described violent scenarios, and that multiple OpenAI staff recommended contacting Canadian police. OpenAI maintains it did not see a credible or imminent threat, while the lawsuit claims the company “took no steps to act upon this knowledge.”

The complaint says an initial ChatGPT account linked to the suspect was banned by OpenAI in June 2025 after troubling conversations, but Canadian authorities were not notified. According to the suit, the banned user then opened another account and continued interacting with the service. OpenAI’s response, as reported in public coverage, is that content moderation led to account actions that did not meet the company’s threshold for contacting law enforcement.

The plaintiffs allege a sequence of failures inside the company: that multiple employees flagged posts as indicating an imminent risk of serious harm to others and recommended alerting police, but those recommendations were rebuffed. The suit frames this as more than an error in moderation; it claims OpenAI had “specific knowledge of the shooter’s long-range planning of a mass casualty event.” That stark language is the legal core, and it pushes the case into new territory about platform responsibility and the limits of content moderation defenses.

On the ground in Tumbler Ridge, the consequences were devastating. The shooter killed eight people, including six children, and then took his own life as police closed in. One victim, twelve-year-old Maya Gebala, was “shot in the neck and head” in the 10 February attack and remains in hospital, according to the legal filings and media coverage. The lawsuit was filed by Maya’s mother, who seeks accountability for what the family says was preventable harm.

An initial ChatGPT account linked to the suspect, 18‑year‑old Jesse Van Rootselaar, was banned by OpenAI in June 2025 due to the nature of her conversations with the chatbot, but Canadian police were not notified.

OpenAI told the BBC it was committed to making “meaningful changes” to help prevent similar tragedies in the future.

The court filing describes detailed conversations between the suspect and the chatbot, portraying the platform as a confidante for violent planning. Plaintiffs say those chats included “various scenarios involving gun violence” and that OpenAI employees recognized the discussions as alarming. If the employees’ internal flags and recommendations are proven, the lawsuit could reshape how companies document and respond to user threats.

OpenAI has defended its decisions by citing internal thresholds for notifying authorities and by arguing that account bans are part of its safety system. The company insists it applied its moderation policies, but the family’s lawyers counter that moderating content and banning accounts are insufficient when a risk of mass harm is disclosed. The legal dispute will hinge on what the company knew, what it documented internally, and whether the law imposes a duty to act beyond content removal.

The civil lawsuit, brought by Gebala’s mother, Cia Edmonds, alleges Van Rootselaar set up an account with ChatGPT before she turned 18 – something users can do with parental consent.

The plaintiffs allege no age verification took place on the site.

The lawsuit claims the suspect saw the chatbot as a “trusted confidante” and described “various scenarios involving gun violence” to it over several days in late spring or early summer 2025.

Twelve OpenAI employees then flagged the posts as “indicating an imminent risk of serious harm to others” and recommended that Canadian law enforcement be informed, the lawsuit alleges.

Instead, it is alleged, the request to contact the authorities was “rebuffed” and the only action taken was to ban Van Rootselaar’s account.

The case raises practical questions about how AI companies handle red flags, the limits of anonymity and account restrictions, and how internal moderation decisions map onto public safety obligations. Plaintiffs argue that internal flags should have triggered outreach to Canadian police or other interventions, while the company says its policies require a higher evidentiary threshold to label a threat as imminent. That factual gap will be central to discovery and testimony if the lawsuit proceeds.

Beyond the courtroom, the claim feeds into a larger debate over platform liability, the role of content moderation, and whether private companies must act as de facto first responders when users disclose violent intent. Regulators and lawmakers are watching similar cases closely because outcomes could influence both industry practices and potential regulation. For families who lost loved ones, legal remedies are one avenue to demand accountability and change.

The lawsuit also touches on user age and verification, with allegations that the suspect created an account before turning 18 and that no meaningful age checks occurred. Those points expand the dispute into questions about parental consent, protections for minors, and how platforms verify identity when safety risks appear. Expect this litigation to probe internal logs, employee communications, and the company’s safety playbook for red flags.

The legal process will determine whether OpenAI’s actions met legal and ethical obligations or whether the company will be found liable for failing to act on information its employees believed signaled imminent danger. In the meantime, the Tumbler Ridge tragedy remains a painful reminder of the human stakes behind policy and product decisions made in the tech world.
