Over the past year, tech giants have come under increasing scrutiny for their roles in spreading viral misinformation that might have helped to decide the 2016 presidential election.
Some of them are now testing the use of “trust indicators” to highlight news sources that meet certain quality and reliability standards. Developed over the past three years by news organization representatives working with the nonpartisan Trust Project, these indicators are aimed at providing readers with more transparency about the news outlets, journalists, financial sponsorship, and methods behind the stories they read, hear, or see.
Among the tech companies that have agreed to use such indicators for their content are Bing, Facebook, Google, and Twitter. The decision is the latest sign that these companies are starting to recognize the extent of the problem with misinformation, propaganda, and “fake news” online.
‘Harder Than Ever to Tell What’s Accurate’
Sally Lehrman, a former writer and editor at the San Francisco Examiner and a journalism instructor at California’s Santa Clara University, began talking with news editors in 2014 about the impact technology was having on the quality of news reporting. Her work led to the launch of the Trust Project, now hosted by the university’s Markkula Center for Applied Ethics.
“In today’s digitized and socially networked world, it’s harder than ever to tell what’s accurate reporting, advertising, or even misinformation,” Lehrman said in yesterday’s announcement from the Trust Project. “An increasingly skeptical public wants to know the expertise, enterprise and ethics behind a news story. The Trust Indicators put tools into people’s hands, giving them the means to assess whether news comes from a credible source they can depend on.”
In addition to agreeing to use the project’s trust indicators, Bing, Facebook, Google, and Twitter are looking into other ideas that can better highlight reliable news reporting….