Facebook claims it uses AI to identify and remove posts containing hate speech and violence, but the technology doesn't really work, report says

Facebook CEO Mark Zuckerberg.
  • Facebook's artificial intelligence removes less than 5% of hate speech viewed on the social media platform.
  • A new report from the Wall Street Journal details flaws in the platform's strategy to remove harmful content.
  • Facebook whistleblower Frances Haugen said that the company dangerously relies on AI and algorithms.

Facebook claims it uses artificial intelligence to identify and remove posts containing hate speech and violence, but the technology doesn't really work, according to internal documents reviewed by the Wall Street Journal.

Senior Facebook engineers estimated that the company's automated systems removed posts that generated just 2% of the rule-violating hate speech viewed on the platform, the Journal reported on Sunday. Another group of Facebook employees came to a similar conclusion, estimating that the AI removed posts accounting for only 3% to 5% of hate-speech views and 0.6% of content that violated Facebook's rules on violence.

The Journal's Sunday report was the latest installment in its "Facebook Files" series, which found the company turns a blind eye to its impact on everything from the mental health of young girls using Instagram to misinformation, human trafficking, and gang violence on the site. The company has called the reports "mischaracterizations."

Facebook CEO Mark Zuckerberg said he believed the company's AI would be able to take down "the vast majority of problematic content" before 2020, according to the Journal. The company stands by its claim that its "super-efficient" AI removes most hate speech and violent content before users even see it; a Facebook report from February 2021 put that detection rate above 97%.

Some groups, including civil rights organizations and academics, remain skeptical of Facebook's statistics because the social platform's numbers don't match external studies, the Journal reported.

"They won't ever show their work," Rashad Robinson, president of the civil rights group Color of Change, told the Journal. "We ask, what's the numerator? What's the denominator? How did you get that number?"

Facebook's head of integrity, Guy Rosen, told the Journal that while the documents it reviewed were not up to date, the information had informed Facebook's decisions about AI-driven content moderation. Rosen said it is more important to look at how the prevalence of hate speech on Facebook is shrinking overall.

Facebook did not immediately respond to Insider's request for comment.

The latest findings in the Journal also come after former Facebook employee and whistleblower Frances Haugen met with Congress last week to discuss how the social media platform relies too heavily on AI and algorithms. Because Facebook's algorithms decide what content to show users based on engagement, the posts the platform pushes to its users are usually angry, divisive, and sensationalistic, and often contain misinformation, Haugen said.

"We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from," Haugen said during the hearing.

Facebook's algorithms can sometimes have trouble determining what constitutes hate speech or violence, leaving harmful videos and posts on the platform for too long. Facebook removed nearly 6.7 million pieces of organized hate content from its platforms from October through December 2020. Some of the removed posts involved organ selling, pornography, and gun violence, according to the Journal.

However, its systems can still miss content such as violent videos and recruitment posts shared by individuals involved in gang violence, human trafficking, and drug cartels.


