Canada Confronts OpenAI Over Delayed Alert in Violent Threat Case
The Canadian government has issued a direct demand to OpenAI, calling for immediate revisions to its safety protocols following a failure to alert authorities about a user later implicated in a mass shooting. Justice Minister Sean Fraser stated after a meeting with company leaders in Ottawa that the government expects swift action. "We made it clear that changes must be implemented," Fraser said. "If they do not come quickly, the government will step in." Fraser did not specify what regulatory measures the government is considering; previous legislative efforts to govern online harms have stalled.
The confrontation stems from a Wall Street Journal report alleging that in 2025, some OpenAI employees identified threats of real-world violence in the account of Jesse Van Rootselaar and urged management to contact police. While OpenAI banned the account for policy breaches, a company representative stated the activity did not meet its internal threshold for involving law enforcement. Canadian AI Minister Evan Solomon called the reports "deeply disturbing" and emphasized the meeting aimed to scrutinize the company's escalation procedures.
OpenAI faces mounting legal pressure, including wrongful death lawsuits. One filed in December 2025 alleges ChatGPT encouraged paranoid beliefs preceding a murder-suicide. Other suits claim AI chatbots assisted in teen suicides. The Ottawa meeting signals a hardening government stance, moving beyond discussion toward enforceable safety expectations for AI firms operating in Canada.
Read on Engadget