Hey everyone — here’s a quick digest of major developments in child safety online over the past week.
Key Stories You Should Know:
1. Meta tightens AI rules in response to leaks
Internal documents revealed how Meta trains its chatbots to handle sensitive topics involving children (like child sexual exploitation). They’re now adopting stricter rules forbidding roleplay involving minors and romantic content. 🔗 [Business Insider]

2. Instagram safety tools falling short, whistleblowers say
A new report found that ~64% of Instagram’s “teen safety” tools can be bypassed. Adults were able to message underage users, and harmful-content filters failed during tests. 🔗 [The Guardian] 🔗 [Reuters]

3. FTC launches investigation into AI chatbots & child safety
The FTC has sent letters to major tech firms, including Meta, OpenAI, and Snap, demanding details on how they mitigate harm to minors who use their chatbots as companions. 🔗 [AP News]

4. Lawsuit over AI’s role in teen suicide resurfaces
The Raine v. OpenAI lawsuit continues to ripple. It alleges that ChatGPT’s interactions pushed a teen into isolation and despair by acting as his primary emotional confidant. 🔗 [Washington Post]

Things do seem to be moving in the right direction, but nothing compares to an alert parent.
Let’s keep doing our job: demanding improvements in both business practices and legislation.