Idle Chatter: 3 Big Misconceptions About Section 230 Of The Communications Decency Act

Regardless of one’s political persuasion, most of us can agree that First Amendment expression is, indeed, a bedrock constitutional principle.

First Amendment expression is a significant pillar of our constitutional freedoms in the United States, and when it comes to free expression online, the protections for vigorous debate should be no exception. Now, more than ever, online platforms such as Facebook and Twitter provide incredible means through which to share not only ideas but news and events. Interestingly, none other than President Donald Trump himself enjoys using Twitter to reach his more than 81 million followers directly. His tweets, however, are not without controversy, and some of them have now fanned the flames of a debate over “censorship” of content (or users) by online platforms, with Trump claiming that such platforms may be engaging in activity that erodes the very bedrock principle of First Amendment expression. Whether you agree with him or not, the underlying premise and its context are worth a look, and may even open your eyes to seeing online content liability in a new light.

How this issue came to a head recently is no surprise. After Trump posted a number of tweets about potential fraud in mail-in voting, Twitter added an alert within those tweets encouraging users to “[g]et the facts about mail-in ballots.” This drew an immediate and intense response from Trump, who claimed that “@Twitter is now interfering in the 2020 Election” by relying on fact-checking from “fake news” CNN and The Washington Post. That seems like some robust free expression to me, but the interesting point here is that Twitter itself acknowledged that Trump’s tweets did not violate its terms of use and policies, yet it still felt obliged to add the warning label. Trump was less than amused: this interaction prompted him to sign an executive order directing federal agencies to alter their interpretation of the liability protections afforded internet service providers under Section 230 of the Communications Decency Act. Interesting, indeed, but for different reasons than you may think.

For those who are not familiar, Section 230 of the Communications Decency Act of 1996 helped shape the internet as it stands today. Under Section 230(c)(1), “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In essence, Section 230 protects internet service providers from being treated like publishers, affording them immunity from liability for content posted on their platforms by others. Further, Section 230 allows such providers to avoid liability for taking action “in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” What does this mean? It means that such providers can regulate content meeting such criteria without fear of civil liability for removing it.

From my experience with Section 230 since its inception, I find the current debate striking because many policymakers (and many lawyers) seem to misunderstand certain aspects of Section 230 and its application that are affecting the debate. Here are the three biggest misconceptions regarding Section 230 that everyone needs to keep in mind:

  1. Don’t Get Caught Up With “Publisher” And “Platform.” Given the text of Section 230(c)(1) and the jurisprudence prior to its enactment, it is easy to fall into the trap of seeing a legal distinction between “platform” and “publisher” based on the extent of control over the content; however, this would be in error. The focus should remain on whether a platform is the “speaker” of the content. For example, if someone posted a defamatory reaction (i.e., a comment) to an article written by a staff writer for Yahoo News, then Yahoo News would not be liable for such defamation simply because the comment appeared on its site. On the other hand, if any of Yahoo’s news editors or staff writers posted defamatory content on the Yahoo News website, then Yahoo News could be held liable for such posting because it would be the “information content provider.” Simply put, the online platform must not be the originator of the defamatory content at issue for Section 230 immunity to apply.
  2. Copyrights Are NOT The Issue In Section 230. The fact that an internet service provider may store content it does not know to be infringing, or “take down” such content under its policies and procedures, and not be held liable for doing so should not be confused with Section 230 immunity. The Digital Millennium Copyright Act (DMCA), and more specifically Section 512, addresses not only immunity for the transmission and caching of infringing content through automated means, but also the requirements for receiving immunity from liability for the storage of content residing on the platform that the provider does not know to be infringing. Of course, the DMCA is a lot more involved than the thumbnail reference above, but the point is that the DMCA addresses immunity from liability for actions taken with respect to copyright infringement. Section 230, by contrast, deals with immunity from liability for the posting of defamatory, obscene, or excessively violent content, among other categories, whether or not such material is constitutionally protected.
  3. Section 230 Does NOT Provide Blanket Immunity. Section 230 definitely provides very broad immunity (by design); however, it is not blanket immunity. Section 230 does not, in fact, protect an internet service provider against criminal prosecution under federal statutes. For example, Section 230 does not grant immunity to websites that facilitate and profit from revenge pornography and sextortion, among other conduct. With the enactment of the “Allow States and Victims to Fight Online Sex Trafficking Act” (FOSTA), signed by Trump in 2018, it became illegal for internet service providers to knowingly assist, support, or facilitate sex trafficking as well. As a result, Section 230 does nothing to immunize an internet service provider from criminal prosecution under such federal statutes.

As noted at the outset, regardless of one’s political persuasion, most of us can agree that First Amendment expression is a “bedrock” constitutional principle. Does this mean that Twitter’s actions on Trump’s tweets merit a remake of Section 230? At best, Twitter’s action seems ill-advised because it is not applied consistently across the entire service; the notion of a social media platform potentially “taking sides” is repugnant to our notions of justice and fair play and undermines legitimate discourse. That said, do these facts merit a re-evaluation of Section 230 immunity? Given the broad interpretation of Section 230 by the courts since the law’s enactment, there is a good chance that a more restrictive interpretation of Section 230 in line with Trump’s executive order will face an uphill constitutional battle. Perhaps that is the point. Inquiring minds will definitely differ, but the point here is that any debate should maintain the correct perspective on Section 230 and what it does (and does not) do. Anything else is just, well, idle chatter.



Tom Kulik is an Intellectual Property & Information Technology Partner at the Dallas-based law firm of Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel and help his clients navigate the complexities of law and technology in their business. News outlets reach out to Tom for his insight, and he has been quoted by national media organizations. Get in touch with Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at tom.kulik@solidcounsel.com.