- Facebook’s artificial intelligence removes less than 5% of hate speech viewed on the social media platform.
- A new report from the Wall Street Journal details flaws in the platform’s strategy to remove harmful content.
- Facebook whistleblower Frances Haugen testified that the company relies dangerously on AI and algorithms.
Facebook claims it uses artificial intelligence to identify and remove posts containing hate speech and violence, but the technology doesn’t really work, according to internal reports reviewed by the Wall Street Journal.
Facebook senior engineers say that the company’s automated system removed only posts that generated just 2% of the hate speech viewed on the platform that violated its rules, the Journal reported on Sunday. Another group of Facebook employees came to a similar conclusion, estimating that Facebook’s AI removed posts that generated only 3% to 5% of hate speech on the platform and 0.6% of content that violated Facebook’s rules on violence.
The Journal’s Sunday report was the latest chapter in its “Facebook Files” series, which found the platform turns a blind eye to its impact on everything from the mental health of young girls using Instagram to misinformation, human trafficking, and gang violence on the site. The company has called the reports “mischaracterizations.”
Facebook CEO Mark Zuckerberg said he believed Facebook’s AI would be able to take down “the vast majority of problematic content” before 2020, according to the Journal. Facebook stands by its claim that most of the hate speech and violent content on the platform gets taken down by its “super-efficient” AI before users even see it. Facebook’s report from February of this year claimed that this detection rate was above 97%.
Some groups, including civil rights organizations and academics, remain skeptical of Facebook’s statistics because the social media platform’s numbers don’t match external studies, the Journal reported.
“They won’t ever show their work,” Rashad Robinson, president of the civil rights group Color of Change, told the Journal. “We ask, what’s the numerator? What’s the denominator? How did you get that number?”
Facebook’s head of integrity, Guy Rosen, told the Journal that while the documents it reviewed were not up to date, the information influenced Facebook’s decisions about AI-driven content moderation. Rosen said it is more important to look at how hate speech is shrinking on Facebook overall.
Facebook did not immediately respond to Insider’s request for comment.
The latest findings in the Journal also come after former Facebook employee and whistleblower Frances Haugen met with Congress last week to discuss how the social media company relied too heavily on AI and algorithms. Because Facebook uses algorithms to decide what content to show its users, the content that is most engaged with, and that Facebook subsequently tries to push to its users, is usually angry, divisive, sensationalistic posts that contain misinformation, Haugen said.
“We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from,” Haugen said during the hearing.
Facebook’s algorithms can occasionally have trouble determining what is hate speech and what is violence, leading to harmful videos and posts being left on the platform for too long. Facebook removed nearly 6.7 million pieces of organized hate content from its platforms from October through December of 2020. Some posts removed involved organ selling, pornography, and gun violence, according to a report by the Journal.
However, some content that can be missed by its systems includes violent videos and recruitment posts shared by people involved in gang violence, human trafficking, and drug cartels.