In the US presidential election and the Brexit referendum, Facebook and Twitter not only allowed fake news to proliferate, but withheld data on voter intentions, says Philip Howard
Social media companies are being criticised for influencing the US presidential election and the Brexit referendum by allowing fake news, misinformation, and hate speech to spread.
But Facebook’s and Twitter’s sin was one of omission: they failed to contribute the data that democracy needs to thrive.
While sitting on huge troves of information about public opinion and voter intent, social-media companies watched as US and UK pollsters, journalists, politicians, and civil society groups made bad projections and poor decisions with the wrong information.
The data these companies collect could have told us whether fake news was swaying voters. Information garnered from social media platforms could have boosted voter turnout, as citizens realised the race was closer than the polls showed, and that their votes would matter. Instead, these companies let the US and UK tumble into democratic deficits, with political institutions starved of quality data on public opinion.
Legally, social media companies are not obligated to share data in the public interest. What they can share is shaped by users’ privacy settings, country-specific rules about selling personal information, and the deals companies such as Facebook and Twitter make with third-party businesses.
But social media are now the primary platforms for political conversation. As such, they should support democratic practices, especially in sensitive political moments like elections.
Facebook and Twitter have the ability to reach, and target, millions of voters. From the minute you sign up to one of these platforms, the companies use data about your behaviour, interests, family, and friends to recommend news and social connections. And they sell this data to other companies, for deeper analysis on what you might buy and what you think about social issues.
By examining data about the connections you make and the content you share, social media companies can make powerful inferences about whether you are likely to vote, how you are likely to vote, and what kinds of news or advertisements might encourage you to engage as a citizen, or discourage you from doing so.
Social media companies study the news consumption habits of their users, producing fine-grained analysis of the causes and consequences of political polarisation. As a result, only Facebook and Twitter know how pervasive fabricated news stories and misinformation campaigns have become during referendums and elections.
They know who clicked on what links, how much time each user spent reading an article, and where the user was physically located. If the companies merged user data with other data sets — say, from credit card records or voter registration files — they might even know the user’s voting history and to which political groups the user has donated.
These companies know enough about voter attitudes to serve up liberal news to liberals and conservative news to conservatives, or fake news to undecided voters. During the recent US presidential election, there was a worrying amount of false information on both Facebook and Twitter, and many users could not distinguish between real and fabricated news.
My own research on this ‘computational propaganda’ shows that Facebook and Twitter can be easily used to poison political conversations. Trump campaigners were particularly good at using bots — basic software programmes with communication skills — to propagate lies.
Bogus news sites were started just to make money for their founders, but undoubtedly influenced some voters’ views when manipulated images and false reports went viral.
Several major US tech companies have since announced steps to rein in fabricated news. For example, Facebook CEO Mark Zuckerberg says he will make it easier for users to report fake news. Facebook has also updated its advertising policies to spell out that its ban on deceptive and misleading content applies to fake news. Google will prevent websites that spread bogus news from using its advertising platform. But more can be done.
While social media use has been on the rise, our systems for measuring public opinion have been breaking down. Telephone- and internet-based surveys are increasingly inaccurate.
With so many people on mobile phones, consuming political content they receive through friends, family, and Facebook, traditional polling companies no longer get a full picture of what the public knows and wants.
For modern democracies to work, three kinds of polling systems need to be up and running. First, nationwide exit polls, which identify mistakes in how elections are run and help to confirm or refute claims of fraud.
For several decades, exit polling was co-ordinated by major news outlets, but the coalition broke down in the US in 2002, and in the UK in 2005. Today, exit polls are run haphazardly, and are more about predicting winners and outcomes than systematically checking the results.
Second, democracies need a regular supply of public policy polls, so journalists, policy makers, civic groups, and elected officials can understand public opinion, before and after voting day.
Third, functioning democracies need ‘deliberative polls’ that put complex policy questions to representative groups of voters, who are given time to evaluate the possible solutions. These polls engage citizens through extended conversations with experts and each other. They lead to informed decision-making.
Facebook and Twitter manage the platforms over which most citizens in advanced democracies now talk about politics, and they could be the platforms for these polling systems.
They won’t completely replace existing techniques for measuring public opinion. But our polling systems are weakening, and social media platforms have a role to play.

With the data at their disposal and the platforms they maintain, social media firms could raise standards for civic participation by refusing to accept ad revenue for fake news. They could let others audit and understand the algorithms that determine who sees what on a platform. They could be the platforms for better opinion, exit, and deliberative polling.

This year, Facebook and Twitter watched as ways of measuring public opinion collapsed. Allowing fake news and computational propaganda to target specific voters is an act against democratic values. But withholding data about public opinion is the major crime against democracy.
Philip N Howard is a professor of sociology, information, and international affairs at Oxford University. He is the author, most recently, of Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up
© Irish Examiner Ltd. All rights reserved