Abstract
This paper argues that Section 230 of the Communications Decency Act should not shield tech companies from liability for their role in behavioral advertising and in designing and deploying algorithms that ensure the spread of dangerous content, including ISIS and far-right recruiting videos, propaganda, and other harmful misinformation. Under the broad reading of Section 230, I argue, tech companies serve two roles and receive immunity for both: they provide the blank medium, and they propel ideologically bundled snippets of information to those most vulnerable to absorbing the radical viewpoints within them. These two roles are distinct, and the second has little to do with free speech or with creating a level playing field for public discourse. Search engines and social media platforms like TikTok, Facebook, Instagram, Snap, Reddit, and YouTube are not blank slates for posting content; they are ecosystems with complex designs. Their business plans and algorithms produce a “rabbit-hole effect” that endangers users. There are public policy approaches that could reduce these harms and offer solutions compatible with free speech and a narrower Section 230 immunity.