The Wall Street Journal | By Sam Schechner | Updated June 15, 2017 3:57 p.m. ET

New software is tasked with identifying videos, photos, language and users that need to be removed, at times without human moderators.

Under intense political pressure to better block terrorist propaganda on the internet, Facebook Inc. is leaning more on artificial intelligence.

The social-media firm said Thursday that it has expanded its use of A.I. in recent months to identify potential terrorist postings and accounts on its platform—and at times to delete or block them without review by a human. In the past, Facebook and other tech giants relied mostly on users and human moderators to identify offensive content.

Even when algorithms flagged content for removal, these firms generally turned to humans to make a final call.

Companies have sharply boosted the volume of content they remove in the past two years, but these efforts haven't proven effective enough to tamp down a groundswell of criticism from governments and advertisers, who accuse Facebook, Google parent Alphabet Inc. and others of complacency over the proliferation of inappropriate content on their social networks, in particular posts or videos deemed extremist propaganda or communication.

British Prime Minister Theresa May ratcheted up complaints this month in the wake of a series of deadly terror attacks in the U.K., and sought new international agreements to regulate the internet and force technology companies to preemptively filter content.

In response, Facebook disclosed new software that it says it is using to better police its content. One tool, in use for several months now, combs the site, including live videos, for known terrorist imagery, such as beheading videos, to stop that material from being reposted, executives said Thursday. The tool, however, doesn't identify new violent videos, such as the video of a murder in Cleveland that was posted to Facebook in April.
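The article doesn't spell out how the matching works, but a common technique for re-identifying known imagery is perceptual hashing: each previously removed image or video frame is reduced to a compact fingerprint, and new uploads are compared against a database of those fingerprints. The following is a minimal, illustrative Python sketch under that assumption; the known_hashes database, hash size and match threshold are hypothetical, not details disclosed by Facebook.

    # Illustrative sketch: match uploads against fingerprints of previously
    # removed imagery using a simple average hash (aHash). Requires Pillow.
    from PIL import Image

    HASH_SIZE = 8          # 8x8 grid -> 64-bit fingerprint
    MATCH_THRESHOLD = 5    # max Hamming distance counted as a match (assumed)

    def average_hash(path: str) -> int:
        """Shrink to 8x8 grayscale, then set one bit per pixel above the mean."""
        img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    def is_known_imagery(path: str, known_hashes: set[int]) -> bool:
        """Flag an upload whose fingerprint is near any previously removed one."""
        return any(hamming(average_hash(path), k) <= MATCH_THRESHOLD
                   for k in known_hashes)

A fingerprint-based match like this can catch re-uploads of content that has already been reviewed and removed, which is consistent with the executives' caveat that the tool doesn't recognize violent videos it has never seen before.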

Another set of algorithms attempts to identify, and sometimes autonomously block, propagandists trying to open new accounts after they have been kicked off the platform. A separate, experimental tool uses A.I. trained to identify language used by terrorist propagandists.
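As a rough illustration of the second, language-based idea, here is a minimal sketch of a text classifier built from TF-IDF features and logistic regression, trained on posts previously removed as propaganda. Facebook's actual model, training data and thresholds are not public; the placeholder examples and routing cutoffs below are assumptions made purely for illustration.

    # Illustrative sketch: score posts for propagandist language and route
    # them to automatic action or human review. Requires scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled data: 1 = previously removed as propaganda, 0 = benign.
    texts = [
        "placeholder text of a post previously removed as propaganda",
        "another placeholder post previously removed as propaganda",
        "photos from our family vacation at the lake",
        "great game last night, what a comeback",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
        LogisticRegression(),
    )
    model.fit(texts, labels)

    def route(post: str) -> str:
        """Map the model's confidence to an action; the cutoffs are assumed."""
        prob = model.predict_proba([post])[0][1]
        if prob > 0.9:
            return "block automatically"
        if prob > 0.5:
            return "queue for human review"
        return "allow"

A confidence-based split like the one in route() is one plausible way to reconcile the two behaviors the article describes: acting autonomously on clear-cut cases while leaving borderline ones to human moderators.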

Facebook declined to say what portion of the extremist material it removes is blocked or deleted automatically, and what percentage is reviewed by humans. The firm's moves reflect a growing willingness to trust machines to help, at least in part, with thorny tasks like distinguishing inappropriate content from satire or news coverage, something tech firms resisted as a potential threat to free speech after a spate of attacks just two years ago.
