Isaac Schick
Isaac Schick is a policy analyst at the American Consumer Institute, a nonprofit education and research organization.
Recently, the Supreme Court declined to decide where the limits of Section 230 lie with respect to content-recommendation algorithms. Many have interpreted this as a positive sign that Section 230 will continue to protect private platforms’ ability to moderate content as they see fit. Some Republican lawmakers have attacked tech platforms for holding political biases and have advocated for state-enforced “neutral moderation.” These critics of supposedly biased content moderation assume that a “neutral” political stance exists and would not simply be a stand-in for the state’s bias.
Section 230 of the Communications Decency Act of 1996 protects online entities from liability for user-generated activity on their sites. It allows businesses and platforms to interact with others online without fear that something a third party did or said could make them liable. Ironically, without Section 230, content moderation would be far more aggressive, since platforms would scrub any comment or activity that might expose them to liability. Despite how some politicians feel, Section 230 protections apply to all forms of online entities, including some we may not think of, like retail product reviews and comments on personal blogs.
Sen. Ted Cruz (R-TX) implied while questioning Mark Zuckerberg before the Senate that because Facebook was not “content-neutral,” it violated a tenet of Section 230 protections. The accusation stems from a notion, popular in certain political circles, that online platforms only receive Section 230 protections if they remain politically neutral in content moderation. As legal experts have clarified, the suggestion has no basis in the law. Section 230 empowers online platforms to moderate their content in “good faith,” which includes removing any content the provider or user considers “objectionable, whether or not such material is constitutionally protected.”
Despite the absence of a “content neutrality” clause in the text of the law, some senators have proposed adding one. Sen. Josh Hawley’s (R-MO) Ending Support for Internet Censorship Act of 2019 would have required online platforms to apply for government certification of content neutrality before receiving liability protections. Though the act received little support, it is clear that a portion of Republican lawmakers believes in curtailing liberal content moderation on “Big Tech” platforms through state intervention.
Beyond the First Amendment issues these attacks raise, there are practical problems. Would a mandate for content neutrality apply to every online entity, and if not, how do we determine whom it applies to? Plenty of non-Big-Tech sites rely on Section 230 to moderate their content, and many likely lean toward one political persuasion or another. It would be unreasonable for the federal government to require a small retail store to apply for a content-neutrality certification just to receive liability protection for its product reviews.
To assume that only large social media platforms must maintain politically neutral content moderation would codify a “big is bad” two-tier legal system. Platforms would no longer compete in a free market on equal legal footing; instead, policymakers would apply the law differently based on company size. Wherever lawmakers draw the line of “bigness” that triggers mandated neutrality is an inherently political choice: narrow limits would affect only Big Tech companies like Facebook, Google, and Twitter, while looser limits could sweep in Reddit, Wikipedia, and others.
Furthermore, what counts as sufficiently politically neutral? Section 230 lets sites interpret “objectionable” content independently because doing so is an inherently subjective determination. Comments critical of homosexuality from a biblical perspective could be viewed as “objectionable” by a site that wishes to attract a gay user base. Likewise, a conservative Christian site may view comments supporting transgender youth as objectionable. For the government to impose a form of content neutrality would only make the state’s version of “objectionable” the standard.
Elon Musk recently stated that on Twitter, the word “cis” would be considered a slur. Not everyone agrees that “cis” should be an “objectionable” word: Facebook, for instance, does not treat it as a slur but has considered misgendering something that warrants moderation. If Twitter and Facebook were to apply for a certificate of neutrality, the political leanings of the certification committee would likely play a massive role in deciding which content moderation was deemed “neutral” and which “politically biased.”
This is why the First Amendment is a protection from the government, not an imposition by it. The state is not an inherently neutral actor, and any attempt to impose neutrality will be fraught with political opportunism, likely violating the First Amendment in the process. Though efforts to impose neutrality on online platforms have died down, they are likely not dead, as politicians will always be incentivized to exert power over the public square.
This article was originally published on FEE.org. Read the original article.