The Daily Wire reported today on a study conducted by Global Witness and the NYU Cybersecurity for Democracy which tested TikTok, YouTube, and Facebook for their ability to detect and deter death threats against election workers.
According to the study, YouTube and TikTok proved better at following their own policies about inciting violence. Facebook, however, did not.
The social media giant permitted 15 of the 20 test ads submitted; the ads were modeled on real-life death threats against poll workers posted on social media.
The study report states that the ads Facebook permitted “…included statements that people would be killed, hanged, or executed, and that children would be molested.”
The researchers concluded the following:
“Platforms need to demonstrate that they can enforce their own policies. In particular, the track record of Facebook in being able to detect and remove the worst kinds of dangerous content is appallingly bad: their policies may look reasonable on paper but they are meaningless unless they are enforced.”
Facebook is good at talking the talk, but when it comes to speech that actually targets individuals with horrific threats of violence, it does nothing.
If the study had tested Facebook with ads that questioned the 2020 election, presented “anti-vax” rhetoric, or “misgendered” someone, the results would have been starkly different. This isn’t a question of algorithmic sophistication; it’s a question of values. Facebook’s are clear.