Explainer: How Letting Platforms Decide What Content To Facilitate Is What Makes Section 230 Work

This shouldn't be this difficult.

There seems to be some recurrent confusion about Section 230: how can it let a website be immune from liability for its users’ content, and yet still get to affect whether and how that content is delivered? Isn’t that inconsistent?

The answer is no: platforms don’t lose Section 230 protection if they aren’t neutral with respect to the content they carry. There are a few reasons, one being constitutional. The First Amendment protects editorial discretion, even for companies.

But another big reason is statutory, which is what this post is about. Platforms have the discretion to choose what content to enable, because making those moderating choices is one of the things that Section 230 explicitly gives them protection to do.

The key here is that Section 230 in fact provides two interrelated forms of protection for Internet platforms as part of one comprehensive policy approach to online content. It does this because Congress actually had two problems it was trying to solve when it passed the law. One was that Congress was worried about there being too much harmful content online. We see this evidenced in the fact that Section 230 was ultimately passed as part of the “Communications Decency Act,” a larger bill aimed at minimizing undesirable material online.

Meanwhile Congress was also worried about losing beneficial online content. This latter concern was particularly acute in the wake of the Stratton Oakmont v. Prodigy case, where an online platform was held liable for its user’s content. If platforms could be held liable for the user content they facilitated, then they would be unlikely to facilitate it. That would lead to a reduction in beneficial online activity and expression, which, as we can see from the first two subsections of Section 230 itself, was something Congress wanted to encourage.

To address these twin concerns, Congress passed Section 230 with two complementary objectives: encourage the most good content, and the least bad. Section 230 was purposefully designed to achieve both these ends by providing online platforms with what are ultimately two complementary forms of protection.

The first is the one that people are most familiar with, the one that keeps platforms from being held liable for how users use their systems and services. It’s at 47 U.S.C. Section 230(c)(1).


No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

It’s important to remember that all this provision does is say that the platform cannot be held liable for what users do online; it in no way prohibits users themselves from being held liable. It just means that platforms won’t have to be afraid of their users’ online activity and thus feel pressured to overly restrict it.

Meanwhile, there’s also another lesser-known form of protection built into Section 230, at 47 U.S.C. Section 230(c)(2). What this protection does is also make it safe for platforms to moderate their services if they choose to. Because it means they can choose to.

No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Some courts have even read subsection (c)(1) to cover these moderation decisions as well. But ultimately, the wisdom of Section 230 is that it recognizes that to get the best results – the most good content and also the least bad – it needs to ensure platforms can feel safe to do what they can to advance both of these things. If they had to fear liability for how they chose to be platforms, they would be much less effective partners in achieving either. For instance, if platforms had to fear legal consequences for removing user content, they simply wouldn’t remove it. (We know this from FOSTA, which, by severely weakening Section 230, has created disincentives for platforms to try to police user content.) And if platforms had to fear liability for enabling user activity on their systems, they wouldn’t do that either. They would instead end up engaging in undue censorship, or cease to exist at all. (We also know this is true from FOSTA, which, by weakening Section 230, has driven platforms to censor wide swaths of content, or even cease to provide platform services for lawful expression.)


But even if Section 230 protected platforms from only one of these potential forms of liability, it would not only be far less effective at achieving Congress’s overall goal of getting both the most good and the least bad content online than protecting them in both ways is; it would also be less effective at achieving even just one of those outcomes than a more balanced approach would be. The problem is that whenever platforms find themselves needing to act defensively, out of fear of liability, that fear tends to undermine their ability to deliver the best results on either front. The fear of legal liability forces platforms to divert their resources away from the things they could be doing to best ensure they facilitate the most good, and least bad, content, and to spend them instead only on whatever protects them from the particular legal threat commanding their outsized attention.

As an example, see what happens under the DMCA, where Section 230 is inapplicable and liability protection for platforms is highly conditional. Platforms are so fearful of copyright liability that they regularly over-remove lawful, and often even beneficial, content, despite such a result being inconsistent with Congress’s legislative intent, or else waste resources weeding out bad takedown demands. It is at least fortunate that the DMCA expressly does not demand that platforms actively police their users’ content for infringement, because if they had to spend their resources policing content in that way, it would come at the expense of policing it in ways that would be more valuable to their user communities and the public at large. Section 230 works because it ensures that platforms can be free to devote their resources to being the best platforms they can be, enabling the most good and disabling the most bad content, instead of having to spend them on activities focused only on what protects them from liability.

To say, then, that a platform that monitors user content must then lose its Section 230 protection is simply wrong, because Congress specifically wanted platforms to do this. Furthermore, even if you think that platforms, even with all this protection, still don’t do a good enough job meeting Congress’s objectives, it would still be a mistake to strip them of what protection they have, since removing it will not help any platform, current or future, ever do any better.

What tends to confuse people is the notion that curating the user content appearing on a platform turns that content into something the platform should now be liable for. When people throw around the imaginary “publisher/platform” distinction as a basis for losing Section 230 protection, this is the idea they are getting at: that by exercising editorial discretion over the content appearing on their sites, platforms somehow transform that content into something they should now be liable for.

But that’s not how the law works, nor how it could work. And Congress knew that. At minimum, platforms simply facilitate far too much content to be held accountable for any of it. Even when they do moderate content, it is still often at a scale at which it could never be fair or reasonable to hold them accountable for whatever still remains online.

Section 230 never required platform neutrality as a condition for a platform getting to benefit from its protection. Instead, the question of whether a platform can benefit from its protection against liability for user content has always turned on who created that content. So long as the “information content provider” (whoever created the content) is not the “interactive computer service provider” (the platform), then Section 230 applies. Curating, moderating, and even editing that user content to some degree doesn’t change this basic equation. Under Section 230 it is always appropriate to seek to hold responsible whoever created the objectionable content. But it is never okay to hold liable the platform they used to create it, which did not.

