This is the best summary I could come up with:
A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.
As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies.
Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide.
The company’s Community Standards rulebook — which ads are supposed to comply with to be approved — prohibits not just text advocating for violence, but also any dehumanizing statements against people based on their race, ethnicity, religion, or nationality.
Though 7amleh told The Intercept the organization had no intention of actually running these ads and was going to pull them before they were scheduled to appear, it believes their approval demonstrates that the social platform remains fundamentally myopic about non-English languages, which are spoken by the great majority of its more than 4 billion users.
In its official response to last year’s audit, Facebook said its new Hebrew-language classifier would “significantly improve” its ability to handle “major spikes in violating content,” such as around flare-ups of conflict between Israel and Palestine.
The original article contains 1,282 words, the summary contains 235 words. Saved 82%. I’m a bot and I’m open source!
Pretty decent, although including the last paragraph without the one that follows it, which points out that this incident shows that claim isn’t true, is a little misleading if you’re not paying attention.
In short: ads designed specifically to be blatantly and cartoonishly evil passed moderation because Facebook agreed with the message.