The bipartisan bill purportedly seeks to hold companies accountable for harm, but it’s unclear whether Section 230 even applies to AI.
U.S. Sens. Josh Hawley, a Republican from Missouri, and Richard Blumenthal, a Democrat from Connecticut, introduced a Senate bill on June 14 that would eliminate special protections for artificial intelligence (AI) companies that are currently afforded to online computer services providers under the Communications Decency Act of 1996 (CDA).
Section 230 refers to text found in Title 47, Section 230 of the CDA. It specifically grants protection to online service providers from liability for content posted by users. It also gives providers immunity from prosecution for illegal content, provided good faith efforts are made to take down such content upon discovery.
Opponents of Section 230 have argued that it absolves social media platforms and other online service providers of responsibility for the content they host. The U.S. Supreme Court recently declined to narrow Section 230 in a lawsuit in which plaintiffs sought to hold social media companies accountable for damages stemming from the platforms’ alleged hosting and promotion of terrorist-related content.
Per the high court’s opinion, a social media site can’t be held accountable for the suggestions made by the algorithms it uses to surface content, any more than an email or cellular service provider can be for the content transmitted via its services.
It’s unclear at this time, however, whether Section 230 actually applies to generative AI companies such as OpenAI and Google, makers of ChatGPT and Bard, respectively.
During a recent Senate hearing, OpenAI CEO Sam Altman told U.S. Sen. Lindsey Graham that it was his impression that Section 230 didn’t apply to his company. When pressed by Hawley, who asked Altman what he thought about a hypothetical situation where Congress “opened the courthouse doors” and allowed people who were harmed by AI to testify in court, the CEO responded, “Please forgive my ignorance, can’t people sue us?”
While Section 230 contains no language specifically covering generative AI, further debate over its relevance to generative AI technologies could come down to the definition of “online service.”
The GPT API, for example, underpins countless AI services throughout the cryptocurrency and blockchain industries. If Section 230 applies to generative AI technologies, it might prove difficult to hold businesses or individuals accountable for harms resulting from misinformation or bad advice generated via AI.