By Daniel Alphonsus
“Ripping apart the social fabric” and “destroying how society works”. These apocalyptic portrayals of social media were once the refrain of neo-luddites. Today they are the words of Chamath Palihapitiya, a Silicon Valley titan and one of Facebook’s earliest employees. Echoing the zeitgeist, he calls “the question at hand” for these troubled times “the business model of truth and who is responsible for it”.
But before you can have a business model for truth, you need a risk model for lies. And just as credit-rating agencies assess default risk, we need fact-rating agencies to assess inaccuracy risk. This is a sensitive and difficult task. But it is implausible to imagine that we can’t develop the tools to determine whether a newspaper should be awarded a triple-A accuracy rating – or be relegated to junk-news.
Financial markets have developed the sophisticated data-gathering, verification and modelling techniques that determine Facebook’s credit rating; similar methods could be used to develop accuracy ratings for content on its platform. Indeed, year in and year out, humans have found ways of assessing the most contentious and complex issues facing humanity. Countries are ranked by their living standards, ease of doing business and corruption. Firms too are scored on their corporate responsibility, carbon footprint and inclusivity.
To be clear, the challenge of accuracy-rating newspapers is immense. Tens of thousands of articles are written every hour in hundreds of different languages; the Washington Post alone publishes over 500 articles a day. But society has already started to respond. Some organisations, like Snopes, try to fact-check individual pieces of content. The startup Knowhere (pronounced know-where) hopes to generate a God’s-eye view of events. Harnessing AI, it intends to produce news that is cross-checked against the entire corpus of information available on the web. Others, like the International Fact Checking Network, encourage existing news-organs to sign up to codes of conduct and perform rudimentary verification.
These initiatives are important. But they are insufficient. For the time being, the resources required to fact-check every individual piece of content are beyond our collective means. AI approaches, like Knowhere’s, currently have their limits too. They depend on evidence of inaccuracy entering the public domain. For example, they have no way of identifying the 1,153 errors Der Spiegel’s fact-checkers once found in a single issue of their magazine.1
For now, it seems the only affordable way of assessing accuracy is by provenance – asking whether content originates from a reliable source. This shifts the focus from the reliability of individual pieces of content to the reliability of newspapers. Broadly, there are two methods for rating a newspaper’s reliability.
The first method, process compliance, is verifying that appropriate structures are established and processes are complied with. Essentially this involves establishing criteria for platform membership (think of the NYSE listing requirements or ISO standards) and using independent auditors to verify compliance. For example, an auditor could ask for proof that an article had been fact-checked before publication, and then examine the fact-checking paper trail too. Or the auditor could interview staff to ensure that a source’s claim has been cross-verified by another reporter.
The second method is observing behavior. By sampling a newspaper’s content, subjecting it to fact-checks and using statistical methods to extrapolate from the sample to the quality of the newspaper’s content as a whole, one can separate the sheep from the goats. This standard operating procedure is already used to prevent actual deaths: randomized sampling for quality control and detection is de rigueur in product testing, construction and border control.
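The extrapolation step can be sketched with a standard statistical routine: fact-check a random sample of articles, then compute a confidence interval for the newspaper-wide error rate. The sketch below is purely illustrative – the sample figures are invented, and any mapping from the interval to a rating grade would be a policy choice, not a statistical one.

```python
import math

def wilson_interval(errors: int, sample_size: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval (default 95%) for a newspaper's true error rate,
    given the number of inaccurate articles found in a random sample."""
    p = errors / sample_size
    denom = 1 + z**2 / sample_size
    centre = (p + z**2 / (2 * sample_size)) / denom
    margin = z * math.sqrt(p * (1 - p) / sample_size
                           + z**2 / (4 * sample_size**2)) / denom
    return max(0.0, centre - margin), min(1.0, centre + margin)

# Hypothetical example: 3 of 200 randomly sampled articles fail the fact-check.
low, high = wilson_interval(errors=3, sample_size=200)
```

Even a modest sample bounds the paper-wide error rate; a rating agency could then grade the upper bound of the interval (triple-A below one threshold, junk above another), with the exact bands left to the agency’s judgement.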
By contrast, it’s striking that Facebook, Google and Twitter’s standards and verification procedures for allowing news-content on their platforms are effectively non-existent. They are certainly much looser than those for listing on the New York Stock Exchange, joining the World Trade Organization or avoiding sanctions by proving to the International Atomic Energy Agency that you aren’t developing nuclear weapons. For example, suppose a firm wants to list on the New York Stock Exchange, or even the tiny Colombo Stock Exchange. This requires letting auditors pore over your company’s most intimate financial laundry, implementing corporate governance requirements that determine who can sit on your board, and establishing specified processes for making decisions. In other words, there are clear criteria for membership and means for verifying compliance, such as auditing and whistleblowing.
Coming back to Chamath’s question of a business model for truth, the first problem is payment: even the cheapest of these options needs money. Newspapers themselves are in no position to pay for this public good. The second problem is generating the incentives for social media giants to use the ratings. Their altruism can’t be trusted: the stakes are too high and, after all, they’ve done a dismal job already.
Truth is a public good. And business models don’t work very well in that domain. The responsibility for truth will remain collective. Government, media, companies, activists and citizens need to work together to build the institutions and incentives that will create and sustain truth. Humanity has done it before. From the IAEA to the Environmental Protection Agency, we have developed a host of institutions, born out of crisis, to verify the truth. The challenge is to adapt those models to today’s problems.
Many of this article’s suggestions will prove unworkable in practice. But that is not the point. Unless we find ways of holding social-media titans accountable we will get the news that we deserve and the Promethean fate that could follow.
Daniel Alphonsus is a Fulbright scholar at Harvard’s Kennedy School of Government. He acknowledges the encouragement of colleagues at HKS’ Shorenstein Center and Future Society.