Brendan Nyhan
Even now, more than two years after the 2016 election, the debate over the influence of social media on our political system still relies largely on scary anecdotes (Twitter’s 50,000-plus impostor accounts are sowing chaos!) and speculation (YouTube is turning our younger generations into conspiracy theorists!). As a result, governments around the world are taking actions to counter misinformation campaigns, many of them based on flawed understandings or illiberal impulses. It’s time for this debate to get serious and start drawing on actual research and evidence.
A quick reality check first. Social media is creating real problems for the world, but moral panics rarely result in good policy. Take the debate over the factually dubious for-profit sites whose content was shared millions of times on Facebook in the period before the 2016 election. These sites certainly polluted the public debate, but contrary to some reports, there’s no evidence that they were responsible for Donald Trump’s victory.
In reality, research I co-authored finds that most people didn’t visit these sites at all in 2016. The same principle applies to Facebook political ads, which still have quite limited reach in 2018 relative to television ads; deepfake videos in politics, an idea where the media coverage radically outstrips the evidence of a crisis; and Russian hacking and information operations, a worrisome violation of our democratic sovereignty that was nonetheless relatively inconsequential to 2016’s electoral outcome.
These exaggerated fears about the influence of online information are reminiscent of past panics about the influence of television and radio. In reality, information from bots, Russian trolls, and fake news websites makes up a very small percentage of the information that we see online and is unlikely to change many people’s minds.
Given these realities, we should be cautious before empowering private companies like Facebook or governments to engage in unprecedented interventions into national political debate on social media platforms. However, an evidence-based case for measured action can still be made.
First, we should worry about alternative forms of influence besides mass persuasion. Most fake news is consumed by a small minority of politically active people who already have highly skewed information diets. Their sharing and consumption of this dubious content elevated it into the national debate and helped it to penetrate mainstream institutions, like parties and interest groups. The ways in which social media amplifies fringe viewpoints and enables them to influence public debate and intermediary groups in society should thus be a greater concern than mass propaganda.
In addition, we should consider the damage these phenomena could inflict if they reached more Americans. While fake news and Russian actions are unlikely to have won the election for Trump, they demonstrated how social media can be a haven for misinformation — a worrisome precedent that could attract more dubious publishers and foreign influence operations at a vastly larger scale. Similarly, campaign ads on Facebook reach few Americans now, but represent an increasing share of political advertising. The effects of dubious and undisclosed advertising on the platform could thus be much greater in the future if regulatory controls are not put in place.
We must also enforce and defend principles of policy and regulation that apply even at the relatively low volumes of fake news and online ad exposure that we currently observe. The requirements for transparency and disclosure in campaign advertising are already in place on other media — why shouldn’t they also apply online? In this vein, laws against foreign interference in elections and hacking already exist and should be applied vigorously when they are violated.
In other cases, social media is highlighting a broader problem that demands a more comprehensive solution. “Dark money,” which refers to political influence efforts whose funders are not disclosed, increasingly pervades our political system. This sort of undisclosed spending does take place on social media, but it is not specific to the format and can easily be shifted to different media if the problem is addressed in a piecemeal fashion. Similarly, as former Facebook security official Alex Stamos notes, the forms of online ad targeting that consumers find most invasive are actually the result of data brokers outside the company aggregating data and matching it to consumers — not the social media companies themselves. This process also takes place in offline communications — for instance, in the direct mail you receive at home. Any effort to crack down on the tracking and aggregation of consumer information should consider data brokers and the matching of data to individuals on platforms, not just what the companies themselves track and record.
Finally, any evidence-based social media policy should consider potential spillover effects on trust in legitimate media coverage. Research I conducted with my students finds that while exposure to general warnings about the presence of fake news was effective in reducing belief in the accuracy of fake news headlines, it also decreased belief in mainstream media headlines. In addition, even very accurate algorithms intended to reduce the prominence of fake news stories can end up frequently suppressing the authentic news articles that still vastly outnumber false ones.
If we can keep these principles in mind, we can avoid misguided efforts to regulate social media that could do more harm than good.