California AI Transparency Act Passes State Senate, Moves to Assembly

California Senate Bill 942, aka the California AI Transparency Act (CAITA), passed the California State Senate by an impressive bipartisan vote of 32-1 on May 21, 2024. It now moves on to the State Assembly, where it will first go before the Assembly Privacy and Consumer Protection Committee in the coming month or so. The bill is modeled in part on Senators Schatz and Kennedy's proposed AI Labeling Act. State Senator Josh Becker is the sponsor and author of the bill, and I am pleased to have played a part in it by proposing its outline to Senator Becker, co-drafting it, and testifying on its behalf before two different committees. In this blog post I provide a summary of the bill by quoting Senator Becker and recent analyses of the bill.

As noted in this press release from Senator Becker:

This bill requires large generative artificial intelligence (GenAI) system providers to label AI-generated images, videos, and audio created by their models with both visible and imperceptible (embedded) disclosures. It also requires providers to offer an AI-detection tool that lets users query whether content was created by AI, and, to the extent technically feasible, to hold third-party licensees to these requirements so that undisclosed AI-generated content is not published.

Senator Becker further notes:

CAITA addresses the growing influence of AI in product creation and ensures that consumers have the information they need to make informed choices. As AI continues to play a pivotal role in various industries, the bill emphasizes the importance of disclosure and consumer awareness. CAITA marks a significant step toward establishing clear guidelines for AI-generated products, setting a precedent for other states and jurisdictions to follow.

So how does the bill provide this transparency?  As noted in the Senate Floor Analysis:

Requires a covered provider to include in AI-generated image, text, video, or multimedia content created by its own generative AI system a visible disclosure that meets all of the following criteria:

a) The disclosure shall include a clear and conspicuous notice, as appropriate for the medium of the content, that identifies the content as generated by AI, such that the disclosure is not avoidable, is understandable to a reasonable person, and is not contradicted, mitigated by, or inconsistent with anything else in the communication.

b) The disclosure shall, to the extent technically feasible, be permanent or difficult to remove.

c) The output’s metadata information shall include an identification of the content as being generated by AI, the identity of the tool used to create the content, and the date and time the content was created.
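To make the metadata requirement in item (c) concrete, here is a minimal sketch of what such an embedded provenance record might contain. The field names and structure are my own illustrative assumptions; the bill does not prescribe a format, and real implementations would likely follow an established provenance standard such as C2PA.

```python
import json
from datetime import datetime, timezone

def build_provenance_metadata(tool_name: str) -> dict:
    """Illustrative record covering the three fields the analysis lists:
    an AI-generated flag, the identity of the generating tool, and the
    date and time of creation. Field names are assumptions, not from the bill."""
    return {
        "ai_generated": True,  # identifies the content as generated by AI
        "generator": tool_name,  # identity of the tool used to create the content
        "created_at": datetime.now(timezone.utc).isoformat(),  # creation timestamp
    }

record = build_provenance_metadata("ExampleImageModel v1")
print(json.dumps(record, indent=2))
```

In practice this record would be embedded in the output file's metadata (e.g., image EXIF/XMP) rather than printed, and the "difficult to remove" requirement in item (b) points toward cryptographically signed or watermarked provenance rather than plain fields like these.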

But what if someone removes the “label” from the AI-generated content? This bill also has an AI detection requirement that does the following:

Requires a covered provider to create an AI detection tool by which a person can query the covered provider as to the extent to which text, image, video, audio, or multimedia content was created, in whole or in part, by a generative AI system provided by the covered provider.
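As a toy illustration of how a provider might support such queries, the sketch below records a fingerprint (hash) of each output it generates and answers membership queries against that registry. This is purely an assumption for illustration; the bill does not specify a mechanism, and a robust real-world tool would rely on watermark detection, since exact hashes fail on any re-encoded or edited copy.

```python
import hashlib

class DetectionTool:
    """Toy AI-detection lookup: the provider records a hash of each output
    it generates, and a query checks membership in that registry.
    Illustrative only; not how a production detection tool would work."""

    def __init__(self) -> None:
        self._known_hashes: set[str] = set()

    def record_output(self, content: bytes) -> None:
        # Called by the provider each time its GenAI system emits content.
        self._known_hashes.add(hashlib.sha256(content).hexdigest())

    def was_generated_here(self, content: bytes) -> bool:
        # Public query: was this exact content produced by this provider?
        return hashlib.sha256(content).hexdigest() in self._known_hashes

tool = DetectionTool()
tool.record_output(b"synthetic image bytes")
print(tool.was_generated_here(b"synthetic image bytes"))  # True
print(tool.was_generated_here(b"an unrelated human photo"))  # False
```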

The bill also addresses downstream use cases, namely by doing the following:

Requires a covered provider to implement reasonable procedures to prevent downstream use of a generative AI system it provides without the disclosure required by this section, including the following:

a) Contractually requiring third-party licensees of the generative AI system to refrain from removing a required disclosure.

b) Terminating access to the generative AI system when the covered provider has reason to believe that a third-party licensee has removed a required disclosure.

In summary, it is becoming increasingly difficult to know whether content is human-generated or machine-generated (i.e., “synthetic”). As U.S. Senator Brian Schatz (D-Hawai‘i) noted when commenting on the downside of GenAI: “People deserve to know whether or not the videos, photos, and content they see and read online is real or not.” CAITA, aka SB 942, will go a long way toward making that a reality.
