The PACT Act Is Not The Solution To The Problem Of Harmful Online Content

The Senate Commerce Committee’s Tuesday hearing on the PACT Act and Section 230 was a refreshingly substantive, bipartisan discussion of the thorny issues surrounding how online platforms moderate user content, and the extent to which these companies should be held liable for harmful user content.

The hearing brought into focus several real and significant problems that Congress should continue to consider. It also showed that, whatever its good intentions, the PACT Act in its current form does not address those problems, much less deal with how to lessen the power of the handful of major online services we all rely on to connect with each other.

EFF Remains Opposed to the PACT Act

As we recently wrote, the Platform Accountability and Consumer Transparency (PACT) Act, introduced last month by Senators Brian Schatz (D-HI) and John Thune (R-SD), is a serious effort to tackle a serious problem: a handful of large online platforms dominate users’ ability to speak online. The bill builds on good ideas, such as requiring greater transparency around platforms’ decisions to moderate their users’ content—something EFF has championed as a voluntary effort through the Santa Clara Principles.

However, we ultimately oppose the bill because weakening Section 230 (47 U.S.C. § 230) would lead to more illegitimate censorship of user content. The bill would also threaten small platforms and would-be competitors to the current dominant players, and it raises First Amendment problems.

Important Issues Related to Content Moderation Remain

One important issue that came up during the hearing is the extent to which online platforms should be required to take down user content that a court has determined is illegal. The PACT Act provides that platforms would lose Section 230 immunity for user content if they failed to remove material after receiving notice that a court had declared that material illegal. It’s not unreasonable to ask whether Section 230 should protect platforms that continue hosting content after a court has found the material to be illegal or unprotected by the First Amendment.

However, we remain concerned about whether any legislative proposal, including the PACT Act, can provide sufficient guardrails to prevent abuse and ensure that user content is not unnecessarily censored. Courts often issue non-final judgments, opining on the legality of content in an opinion on a motion to dismiss, for example, before reaching the merits stage of a case. Some court decisions are default judgments, entered because the defendant, for whatever reason, did not show up to defend herself; any determination about the illegality of the content in those cases is suspect because the question was never subjected to a robust adversarial process. And even when a trial court issues a final order, that decision is often appealed and sometimes reversed by a higher court.

Additionally, some lawsuits targeting user content are harassing suits that might be dismissed under anti-SLAPP laws, but not all states have such laws, and no anti-SLAPP statute consistently applies in federal court. Finally, some documents that appear to be final court judgments may be falsified, which would lead to the illegitimate censorship of user speech unless platforms spend considerable resources investigating each takedown request.

We were pleased to see that many of these concerns were discussed at the hearing, even if a consensus wasn’t reached. It’s refreshing to see elected leaders trying to balance competing interests, including how to protect Internet users who are victims of illegal activity while avoiding the creation of broad legal tools that can censor speech that others do not like. But as we’ve said previously, the PACT Act, as currently written, doesn’t attempt to balance these or other concerns. Rather, by requiring the removal of any material that someone claims a court has declared illegal, it tips the balance toward broad censorship.

Another thorny but important issue is the question of competition among online platforms. Sen. Mike Lee (R-UT) expressed his preference for finding market solutions to the problems associated with the dominant platforms and how they moderate user content. EFF has urged the government to consider a more robust use of antitrust law in the Internet space. One thing is certain, though: weakening Section 230 protections will only entrench the major players, as small companies don’t have the financial resources and personnel to shoulder increased liability for user content.

Unfortunately, the PACT Act’s requirements that platforms put in place content moderation and complaint-response services will only further cement the dominance of services such as Facebook, Twitter, and YouTube, which already employ vast numbers of people to moderate users’ content. Small competitors, by contrast, lack the resources to comply with the PACT Act.

Let’s Not Forget About the First Amendment

The hearing also touched on several understandably concerning categories of content, including political and other misinformation, hate speech, terrorism content, and child sexual abuse material (“CSAM”). By and large, however, these categories of content (except for CSAM) are protected by the First Amendment, meaning the government can’t mandate that they be taken down.

To be clear, Congress can and should be talking about harmful online content and ways to address it, particularly when harassment and threats drive Internet users offline. But if the conversation focuses on Section 230, rather than grappling with the First Amendment issues at play, then it is missing the forest for the trees.

Moreover, any legislative effort aimed at removing harmful, but not illegal, content online has to recognize that platforms that host user-generated content have their own First Amendment rights to manage that content. The PACT Act intrudes on these services’ editorial discretion by requiring that they take certain steps in response to complaints about content.

Amidst a series of bad-faith attacks on Internet users’ speech and efforts to weaken Section 230’s protections, it was refreshing to see Senators hold a substantive public discussion about what changes should be made to U.S. law governing Internet users’ online speech. We hope it can serve as the beginning of a good-faith effort to grapple with real problems and to identify workable solutions that balance the many competing interests while ensuring that Internet users continue to enjoy diverse forums for speech and community online.
