Financial Times - Facebook turns to AI to help block terror posts

June 15, 2017. Hannah Kuchler.

Social network stresses new hires and tech as pressure from governments to act mounts.

Facebook has developed artificial intelligence technology to help it identify terrorist posts, while increasing its team of counter-terrorism specialists to more than 150, as the social network pushes back against accusations it does not do enough to stop extremist propaganda spreading online.

The world’s largest social network has declared it is committed to making the site a “very hostile environment for terrorists”, as politicians in Germany, the UK and France discuss imposing fines on platforms that do not take down their posts.

Recent attacks in London and Manchester have increased pressure on tech companies to show they are taking action. The UK prime minister Theresa May, who has accused the platforms of providing “safe spaces” for terrorists online, launched a joint anti-terror campaign targeting social media companies with French President Emmanuel Macron this week. 

Monika Bickert, head of global public policy at Facebook and a former prosecutor, said the company had made “significant improvements” in the past year. 

“We’re not saying we have all the answers, that we are perfect at it,” said Ms Bickert, adding Facebook had been working with academic experts, community groups and other social platforms to improve detection and takedown rates. 

“The conversations we are seeing in the media right now show government officials saying that they want social media companies to take this seriously, they want social media to invest in ways to remove terrorist propaganda quickly. We are already doing those things, so we wanted to share how we are doing them and how much better we are doing.” 

Facebook has deployed its AI technology so that some clearly banned images — such as beheading videos — are stopped before they are ever posted to the site. It says it has also improved its ability to track terrorists to stop them creating new accounts when they are banned from the network, as well as developing a “hashing” system to share known terrorist images with other platforms. 
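
The article does not describe the hash-sharing system in technical detail; the sketch below is a hypothetical illustration of the general idea, in which a platform compares a fingerprint of each new upload against a pool of digests contributed by other companies. It uses an exact-match SHA-256 digest and placeholder data for simplicity, whereas production systems typically rely on perceptual hashes that can also match re-encoded or cropped copies of an image.

import hashlib

# Hypothetical sketch of a shared hash list (not Facebook's actual system):
# participating platforms contribute digests of images they have already
# identified and removed, and new uploads are checked against the pooled set
# before they go live.

def image_fingerprint(image_bytes: bytes) -> str:
    # Return a hex digest used as the image's fingerprint.
    return hashlib.sha256(image_bytes).hexdigest()

# Digests previously shared by participating platforms (placeholder values).
shared_known_hashes = {
    image_fingerprint(b"example-known-image-bytes"),
}

def should_block_upload(image_bytes: bytes) -> bool:
    # Block the upload when its fingerprint matches a known, shared image.
    return image_fingerprint(image_bytes) in shared_known_hashes

print(should_block_upload(b"example-known-image-bytes"))  # True
print(should_block_upload(b"new-unseen-image-bytes"))     # False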

Brian Fishman, lead policy manager for counterterrorism, joined Facebook last year after writing a book, The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory, and serving as director of research at the United States Military Academy’s Combating Terrorism Center. He said Facebook was introducing new technology as terrorists changed their tactics in a “cat and mouse game”. 

“This is incredibly dynamic. There are things we are doing today that we weren’t doing a week ago, things we’re going to be doing in two weeks that we’re not doing today,” he said. 

Facebook has also increased the number of people in its counter-terrorism team to 150, including academics, former law enforcement agents and engineers. 

Facebook announced last month that it was also putting more humans on the problem of unsavoury content on the site — be it terror-related, violent or sexually explicit — adding 3,000 moderators to its existing team of 4,500. That announcement came after it was slow to take down videos of a murder in Ohio and of a man killing his baby in Thailand.

Ms Bickert said the company was already in the process of hiring the new moderators and was looking for expertise in languages and in subjects such as self-harm and hate speech.

“The will of the industry is there,” she said. “It is not as easy as flipping a switch.”
