US Social Media Content Moderation: Junk the Antitrust Suits, Focus on the Data Diet

The recent attack on the United States Capitol and Donald Trump’s subsequent ban from several major social media platforms have brought social media content moderation, and the possibility of government regulation, back into the national spotlight. The events at the Capitol demonstrate the pressing need for change in how platforms moderate and curate content, but just how this should be done is not nearly as clear.

Policymakers will have to chart a course through hazardous waters, avoiding the temptation to employ dated tools, trample on individual freedoms, or damage the global competitiveness of some of America’s most prestigious firms, all while correcting a fragmented and dangerous information environment. The focus should be on the unique capacity of platforms to curate information for their users through algorithms, rather than leaning on outdated policy tools designed for fundamentally different business practices.

But with a bipartisan collection of state attorneys general and members of Congress pushing antitrust suits against Google and Facebook, policymakers have already been drawn towards tools that are likely ineffective for the task. There is reason to believe that even if won, these suits will prove an ineffective instrument for improving social media content, whatever their other merits. Some contend that antitrust law is an outdated tool that fails in the face of the rapid scalability of social media: a spin-off of a broken-up platform could quickly rise again to market dominance. Additionally, a more diverse array of platforms does not address the issue of widespread misinformation; more platforms may lead to further fragmentation and misinformation as users seek out platforms that curate content suited to their unique desires.

Another approach is reforming Section 230 of the Communications Decency Act of 1996, which provides platforms immunity from liability for content created by users. Holding platforms liable for content posted by users could help incentivize the removal of illegal content, but it comes with serious drawbacks, as demonstrated by Germany’s 2017 Network Enforcement Act. Human rights groups and other critics argue that its liability clause incentivizes platforms to remove more content than the law intended, damaging free speech, though the empirical evidence here is mixed. A similar law passed in France was gutted by its Constitutional Council, at least in part due to its impact on free speech. Given the criticism this approach faced in Germany and France, there is good reason to believe such a policy would be struck down by the Supreme Court, if it could muster the unlikely political support needed in the first place.

A more forward-looking suggestion has been the widespread adoption of middleware, or “software that rides on top of an existing platform and can modify the presentation of underlying data.”* While middleware gets to the heart of the issue by augmenting the underlying algorithm, it does not necessarily improve on it. There is little reason to assume users would choose middleware that fact-checks over middleware that curates conspiracy theories. Rather than being the competitive solution its proponents tout it as, middleware would likely deliver much of the same.

None of these approaches adequately addresses the fundamental issue: platforms curate content that maximizes screen time, often promoting misinformation and hate. Policymakers must instead approach this new business model and technology with a new toolkit form-fitted for the job. An important step forward would be the passage of a national data privacy law, creating transparency in how platforms’ algorithms use data and giving users better control over how their data is used, and subsequently over the content they see. There is mutual interest here between platforms and regulators: platforms are eager to have a uniform privacy law that would replace the inefficiencies of complying with the piecemeal assortment of state laws that currently exist. While a privacy law will be low on the policy priority list given the challenges posed by the pandemic, its achievement is closer than ever before and could happen in the next few years.

* Francis Fukuyama, Barak Richman, and Ashish Goel, “How to Save Democracy from Technology: Ending Big Tech’s Information Monopoly,” Foreign Affairs (January/February 2021): 106–110.