The Facebook verdict is not about Section 230
Ryan Broderick, writing at Garbage Day about this past week’s New Mexico court decision against Facebook and YouTube:
But I am more than a little freaked out by what New Mexico attorney general Raúl Torrez said in an interview with CNBC following the verdict.
“One of the things that I am really focused on is how we can change the design features of these products, at least within New Mexico,” he said. “And that would create a standard that could then be modeled elsewhere in the country, and, frankly, around the world.” No!!! New Mexico should not be telling the whole world how to do anything.
Anyways, Meta’s legal loss in New Mexico this week is part of a broader attack on digital freedoms in the US right now. There are age verification laws popping up in states across the country, bills that would eliminate end-to-end encryption entirely, and the Kids Online Safety Act (KOSA), which would force platforms to censor content for minors, which would mean building tools that could determine who is a minor and who is not. All of these bills are meant to bypass or dismantle Section 230, the statute that basically makes the internet in the US work.
No, sorry—this comparison is just wrong.
I don’t mean to pick on Broderick; I have been seeing a bunch of these “But this will kill Section 230!” reactions from Very Online People this week. But his is a perfect example.
While I am not a lawyer, my understanding of Section 230 is that it generally exempts online service providers and platform owners from responsibility for what their users post. It is what prevents people from suing an ISP because someone hosts a shitty blog on their service, or suing Facebook because someone writes a defamatory post. On the whole, that is a good thing and I agree that yes, we ought to keep it around. I also agree that KOSA-type legislation that forces censorship in the name of “Won’t someone PLEASE think of the children!” is a bad idea, as are all the attempts to outlaw encryption and VPNs that are currently floating around.
But what happened in New Mexico is not like those other things. Facebook did not get held liable for user-generated content it was hosting. Facebook was held liable for designing a deliberately addictive and harmful product. The plaintiffs used the same legal strategy that was successful against the tobacco companies: those companies knew they had a harmful product, hid evidence of that harm, deliberately made it more harmful and addictive, and then hid evidence that they were doing so.
So, no—this verdict is not about Section 230. What I find interesting is that so many people who should know better are conflating these two very different issues.
I think maybe it goes back to a general unwillingness, whether deliberate or not, to acknowledge any responsibility on the part of tech companies for the harms their products cause. The tech industry and the people who cover it are very invested in the idea that these products and platforms are simply tools with no inherent politics or ideology, and that they should therefore be held entirely blameless for the effects they have on the world.