OpenAI debated calling police about suspected Canadian shooter’s chats
OpenAI's safety tools caught the warning signs: an 18-year-old's ChatGPT conversations about gun violence were flagged, and her account was banned back in June 2025. Months later, she allegedly killed eight people in a mass shooting in Canada. Staff at the company debated calling the police at the time but decided the case didn't meet their threshold for reporting. It's a gut-punch of a story that raises an uncomfortable question no AI company has a good answer to yet: when you see something, at what point are you obligated to say something?