Facebook Inc is working on automatically flagging offensive material in live video streams, building on a growing effort to use artificial intelligence to monitor content, said Joaquin Candela, the company’s director of applied machine learning.

The social media company has been embroiled in a number of content moderation controversies this year, from facing international outcry after removing an iconic Vietnam War photo due to nudity, to allowing the spread of fake news on its site.

Facebook has historically relied mostly on users to report offensive posts, which are then checked by Facebook employees against company “community standards.” Decisions on especially thorny content issues that might require policy changes are made by top executives at the company.

Candela told reporters that Facebook was increasingly using artificial intelligence to find offensive material. It is "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," he said. The company had already been working on using automation to flag extremist video content, as Reuters reported in June.
