Despite all the furor, the future of the Internet does not depend on two cases argued this week before the U.S. Supreme Court. There is no risk that the statutory immunity Congress long ago granted to Internet service providers will collapse. The justices are being asked to decide a narrow and technical legal question. If the ISPs lose, they will make some changes to the algorithms they use to sort content. Most users' experience will hardly change.

The two cases that triggered these dire predictions involve lawsuits against Google and Twitter, respectively. The lawsuits were filed by families who lost loved ones to heinous terrorist acts. The central allegation is that the companies encouraged these acts through the videos and other materials they made available to users. The justices are not being asked to decide whether the allegations are true, but whether the cases should go to trial, in which case a jury would determine the facts.
Google is being sued based on the recommendations YouTube's algorithms make to users in the familiar "Up next" box. Twitter is accused of not making enough effort to remove pro-terrorist posts. The question of immunity is clearly raised only in the Google case. But because a Google victory would almost certainly doom the suit against Twitter, the immunity argument is worth examining in detail.
The relevant question before the court is how to interpret Section 230(c)(1) of the Communications Decency Act, passed by Congress in 1996 after a New York court held an ISP liable for allegedly defamatory content posted on a discussion forum it hosted.
The text is simple: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." When commentators talk about ISP legal immunity, this is the main provision they have in mind.
Here's how the law works: if I upload a video to YouTube, I am the content provider, but YouTube is neither the speaker nor the publisher. Therefore, if my video causes harm (through defamation, for example), YouTube is not responsible.
Sounds simple, right? But now we come to what the judges must decide: If Google creates an algorithm that recommends my harmful video to you, is the video still provided by “another” provider, or is the provider now YouTube itself? Or, in the alternative argument, does the algorithm’s recommendation turn Google into the publisher of the video? Either interpretation of the law would allow plaintiffs to circumvent legal immunity.
These are not easy questions to answer. But neither are they political issues that should be referred to Congress. They concern nothing other than the ordinary, everyday work of the courts: determining the meaning of a statute susceptible to more than one interpretation.
In fact, courts have often ruled on the limits of Section 230 immunity. In perhaps the best-known example, the U.S. Court of Appeals for the 9th Circuit ruled in 2008 that Section 230 offered no protection to a roommate-matching site that required users to answer questions that could not legally be asked of housing applicants. The questions, the court wrote, made the site "the developer, at least in part" of the content at issue.
In the Google case, on the other hand, the 9th Circuit held that the selection algorithm is only a tool that helps users find the content they want, based on what they themselves have viewed or searched for. The use of the algorithm did not make Google the creator or developer of the ISIS recruitment videos that are the centerpiece of the case, because the company did not materially contribute to "the 'illegality' of these videos." In his dissent, Judge Ronald Gould argued that the plaintiffs should be allowed to go to trial on their allegations that Google "knew that ISIS and its supporters were inserting propaganda videos into their platforms" and should share legal responsibility because YouTube, through its selection algorithms, "magnified and amplified these communications."
During oral argument in the Google case, Justice Ketanji Brown Jackson questioned whether the ISPs were turning Section 230 on its head. The provision was written, she said, to allow companies to block certain offensive content. How, she asked, was it "conceptually consistent with Congressional intent" to use the section as a shield for promoting offensive materials?
The answer depends on whether using an algorithm to decide what content to recommend is like telling the user, "This is a great thing and we fully support it!" Here, my own view is that Big Tech has the better of the argument. But the question is extremely close. And I certainly don't think a ruling against the ISPs would make the sky fall.
Google warns in its brief that if the plaintiffs' interpretation of Section 230 prevailed, the company would have no way to sort and categorize third-party videos, let alone decide which, if any, to recommend to a given user. And the company goes further: "Hardly any modern website would work if users had to sort through content themselves."
Good points! But not as good as they would be if the company's YouTube subsidiary, along with other ISPs, hadn't spent so much time in recent years tweaking algorithms to address government objections to the content being recommended to users. In other words, if the ISPs lost, I think they would be fine.
I suspect that what worries ISPs is less the potential complexity of complying with reduced immunity than the flood of lawsuits, many without merit, that would surely ensue. This is a real concern – and unlike the correct interpretation of a law, it’s exactly the kind of problem we might want Congress to address.
© 2023 Bloomberg LP