“Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad content online and newsworthy depiction of suicide,” Antigone Davis, Global Head of Safety, Facebook, wrote in a blog post on Tuesday.
The social media giant has been working on suicide prevention measures for several years, and in 2017 it introduced its artificial intelligence (AI)-based suicide prevention tools.
“…We’ve made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm, even when someone is seeking support or expressing themselves to aid their recovery,” Davis added.
Facebook-owned Instagram started hiding self-harm images behind “sensitivity screens” this year.