In the wake of Britain’s third major attack in three months, Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online.
Here’s a look at extremism on the web, what’s being done to stop it and what could come next.
Q. What are technology companies doing to make sure extremist videos and other terrorist content don’t spread across the internet?
A. Internet companies use technology plus teams of human reviewers to flag and remove posts from people who engage in extremist activity or express support for terrorism.
Google, for example, says it employs thousands of people to fight abuse on its platforms. Google’s YouTube service removes videos that contain hateful content or incite violence, and its software prevents a removed video from being reposted. YouTube says it removed 92 million videos in 2015; 1 percent were removed for terrorism or hate speech violations.
Facebook, Microsoft, Google and Twitter teamed up late last year to create a shared industry database of unique digital fingerprints for images and videos that are produced by or support extremist organizations. Those fingerprints help the companies identify and remove extremist content. After the attack on Westminster Bridge in London in March, tech companies also agreed to form a joint group to accelerate anti-terrorism efforts.
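In rough outline, the shared database works like a lookup table of fingerprints: when one company removes a piece of content, it contributes a fingerprint, and the others can check new uploads against the pool. The sketch below is an illustrative simplification, not the companies’ actual implementation (which has not been published); it uses an exact SHA-256 hash, whereas the real systems are believed to use perceptual hashes such as Microsoft’s PhotoDNA, which also match re-encoded or slightly altered copies.

```python
import hashlib

# Illustrative sketch of a shared fingerprint database.
# NOTE: exact hashing only catches byte-identical re-uploads;
# production systems reportedly use perceptual hashing instead.

shared_database = set()  # fingerprints of known extremist images/videos


def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()


def flag_known_content(file_bytes: bytes) -> bool:
    """Return True if an upload matches a fingerprint in the shared pool."""
    return fingerprint(file_bytes) in shared_database


# One company removes a file and shares its fingerprint...
removed_file = b"<bytes of a removed video>"
shared_database.add(fingerprint(removed_file))

# ...and any participating company can now block the identical re-upload.
print(flag_known_content(removed_file))   # True
print(flag_known_content(b"new upload"))  # False
```

Sharing only fingerprints, rather than the files themselves, lets each company keep its own removal policies while avoiding redistributing the offending material.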
Twitter says in the last six months of 2016, it suspended a total of 376,890 accounts for violations related to the promotion of extremism. Three-quarters of those were found through Twitter’s internal tools; just 2 percent were taken down because of government requests, the company says.
Facebook says it alerts law enforcement if it sees a threat of an imminent attack or harm to someone. It also seeks out potential extremist accounts by tracing the “friends” of an account that has been removed for terrorism.
Q. Why are technology companies clashing…