Accusations that prominent online platforms are stifling conservative voices have been much in the news lately—accusations that have turned into threats of investigations or enforcement actions. In late August, President Trump tweeted that various platforms are “suppressing voices of Conservatives,” pledging that this “very serious situation will be addressed.” In September, former Attorney General Jeff Sessions convened a meeting of state attorneys general to discuss whether the platforms are “hurting competition” and “intentionally stifling the free exchange of ideas.”
The companies in question have forcefully denied any anti-conservative bias in the operation of their algorithms or application of their community standards. Their algorithms, they say, are neutral tools for sorting and classifying information online, and their standards aim to create a safe environment, not to squelch particular views.
If litigation or enforcement nonetheless materializes, however, among the most significant issues that regulators, litigants, and courts will confront is whether the First Amendment prohibits second-guessing the platforms’ decisions about what content to disseminate. (Disclosure: The authors of this article represent various online providers in matters presenting these and similar issues.)
The First Amendment Argument
The Supreme Court has long recognized that the First Amendment protects not only the right to publish one’s own speech but also the “exercise of editorial control and judgment” that, for example, a newspaper undertakes in deciding whether to publish third-party submissions. Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974).
Online service providers, the argument goes, exercise the same editorial judgment when adopting and enforcing community standards for speech on their platforms, or when designing and applying algorithms to filter and classify that speech. Such judgments are simply the digital-economy version of a bookstore or newsstand deciding which books or magazines to carry, or a cable operator assessing whether and when to air particular programming—and are just as entitled to First Amendment protection as those decisions.
Accusations that anti-conservative bias motivates these judgments in no way diminish the platforms’ First Amendment rights. “[W]hether fair or unfair,” the Supreme Court held in Miami Herald, the First Amendment protects the right to choose what “material” to present. Indeed, if the accusations were accurate (contrary to what the companies have maintained), that would strengthen the First Amendment argument because such “political expression” triggers the most robust level of First Amendment protection. See McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 346 (1995).
Against the First Amendment
Critics of this First Amendment argument respond that online platforms are not distributors of other people’s speech (the way newspapers, bookstores, and cable operators are), but rather operators of a public forum for expression. For these critics, the key precedent is not Miami Herald but Pruneyard Shopping Center v. Robins, in which the Supreme Court held that the First Amendment did not prevent a privately owned shopping mall from being forced, under state law, to permit members of the public to solicit signatures and distribute political pamphlets in the concourses of the mall. 447 U.S. 74 (1980).
According to this counterargument, the real free-speech right at stake is that of online users who lose access to these public forums. The only free-speech interest that the platforms possess, this viewpoint holds, is in avoiding the risk that the public might mistakenly attribute a message written by a user to the platform itself. As with the mall in Pruneyard, though, that limited speech interest can be accommodated fully by online providers “publicly dissociat[ing] themselves from the views of the speakers” they host. 447 U.S. at 88.
Generally, courts that have confronted these or similar issues have affirmed that online service providers have a robust First Amendment right to decide how best to arrange and display (or not display) third-party content on their platforms. See, e.g., Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014). Though these questions now arise in a new, more highly charged context, we expect courts to answer them the same way the Southern District of New York did in Zhang: Second-guessing platforms’ algorithms or community standards would “‘violate[] the fundamental rule of protection under the First Amendment, that a speaker has the autonomy to choose the content of his own message.’” Id. (quoting Hurley v. Irish-American Gay, Lesbian, & Bisexual Group of Boston, 515 U.S. 557 (1995)).
-----------
Named the 2019 Best Lawyers “Lawyer of the Year” for Government Relations Practice in Washington, D.C., Jamie Gorelick is the chair of the Regulatory and Government Affairs Practice at WilmerHale, where she represents organizations and individuals on a wide array of high-stakes regulatory and enforcement matters, involving issues as diverse as antitrust, cybersecurity, and the First Amendment. She was one of the longest-serving deputy attorneys general of the United States, general counsel of the Defense Department, and a member of the bipartisan “9/11 Commission.”
Ari Holtzblatt is a counsel in the Appellate and Government Regulatory Litigation practices at WilmerHale in Washington, D.C., where he represents organizations and individuals in high-profile litigation at every level of the federal system, from the trial court to the U.S. Supreme Court. He has litigated cutting-edge issues for leading technology companies, including under the First Amendment, the Communications Decency Act, the Copyright Act, and the Stored Communications Act.