Ashley Belanger / Ars Technica - ChatGPT taught teen to jailbreak it so the bot could assist in his suicide, lawsuit says.
Tuesday, August 26, 2025, 3:20 pm / 7 stories in 6 months
ChatGPT Adds Mental Health Safeguards Before GPT‑5 / 7 months
ChatGPT Updates: Bolstered Mental Health Safeguards Announced / 7 months
OpenAI Investigates Deceptive Behavior in Chatbot Models / 5 months
ChatGPT safety update introduces parental controls and age prediction / 5 months
ChatGPT adopts age verification after teen suicide lawsuit controversy / 5 months
FTC probes AI chatbot practices protecting minors / 5 months
FTC probes AI chatbot safety on kids / 5 months