Whether Section 230 applies to artificial intelligence (AI) is a hotly debated question. Somewhat surprisingly, the law's own authors have claimed it may not, but the answer is likely more complicated than a simple yes or no. Section 230 has been critical to how the internet has expanded free speech online, creating a market that gives users opportunities to speak and reflecting core principles about the ability of private platforms to make decisions about their services.
Legislating an AI carveout from Section 230, however, would have much deeper consequences both for online speech as we experience it today and for the future development of AI.
A Refresher on Section 230 and What It Tells Us About the Debate over Whether It Applies to AI
The basic text of Section 230 reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In analyzing whether Section 230 applies to AI, we should go back to the text as drafted.
Generative AI is likely an interactive computer service, but some debate may remain over who the speaker is when a generative AI produces content. (A similar debate is playing out over the application of certain intellectual property principles.) In practice, however, these questions will rarely be the ones at issue. Most questions about AI and Section 230 do not concern the mere production of a piece of content or an image; instead, they involve a user reposting that content on other platforms or are otherwise connected to content generated by a user.
Removing Section 230 for AI Would Have Far More Significant Consequences
While generative AI services like DALL‑E and ChatGPT gained popularity in 2022, AI has been used in many ways, including in popular user‐generated content features, for much longer. As a result, legislative attempts to remove Section 230 protection for AI would likely affect far more content than the narrower subset of generative AI services.
AI, including generative AI, is already embedded in many aspects of online services such as social media and review sites. It helps identify potential spam and improves search results for a specific user. Beyond that, it already powers many popular features.
For example, removing protection for AI could eliminate commonly used photo filters on social media and even raise questions about features that help generate captions for videos. These are exactly the kinds of user‐generated content and creativity Section 230 was meant to support. An AI exception, however, would likely lead many platforms to disable such tools rather than risk increased liability.
An AI exception to Section 230 would also undermine much of the framework the law intended. It would shift away from the American approach that has encouraged innovators to offer creative tools to users, instead punishing those innovators for what others may do with their tools. This erodes the foundation of Section 230, hampers potentially beneficial innovation, and misguidedly moves responsibility from bad actors to innovators.
Some critics may still debate how Section 230 applies to specific elements of generative AI services, but an AI carveout from Section 230 would create more problems than it would solve. AI already interacts with a wide array of user‐generated content, and such a carveout would broadly affect both users' current experience of the internet and the future development of AI.