
Should the Govt Restrict Online Speech?

There’s no doubt that certain kinds of online conduct are reprehensible. But that doesn’t mean we should disregard the First Amendment.


Rachel Chiu
Rachel Chiu is a JD candidate at Yale Law School and a Young Voices contributor focused on online speech and technology policy.

The First Amendment prohibits the federal government from suppressing speech, including speech it deems ‘harmful,’ yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill that purports to protect children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take ‘reasonable’ measures to prevent and mitigate harm.

It’s now up to the House to send the bill to the president’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media ‘safer.’ In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within a few narrowly defined categories, such as incitement, true threats, and obscenity. Prohibitions against harmful speech – where ‘harmful’ is determined solely by lawmakers – are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. Viewed in its best light, KOSA is genuinely about protecting children and their health. New York offered similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should step in to protect vulnerable groups from harm. The courts, however, have generally held that this is not sufficient justification for limiting protected speech.

In American Booksellers Association v Hudnut (1985), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, the fact that speech plays a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the child-protection rationale, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility for moderation on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a ‘duty of care’ to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech, since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the ‘duty of care,’ this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger one of the enumerated harms. While the platform would technically be the entity implementing these policies, the government would be driving content removal.

Ultimately, the fixation on harm does little to justify speech limitations. Legislation that reduces lawful speech to promote a larger social good is still a vehicle for the government to become, as Judge Easterbrook wrote, the “great censor” of our thoughts.

This article was originally published by the Foundation for Economic Education.
