California AI Transparency Act (SB 942) Passes Initial State Assembly Tests

Yesterday, I traveled to a very hot Sacramento (110 degrees!) and testified before the State Assembly Judiciary Committee on behalf of State Senator Josh Becker’s California AI Transparency Act (CAITA), which is Senate Bill 942 (SB 942). It passed by a healthy 9-1 vote, having earlier passed the Assembly Privacy Committee by a vote of 8-0 (a reminder that the bill sailed through the State Senate). It now moves to the Assembly Appropriations Committee, which is headed by Assemblymember Buffy Wicks, who has a comparable bill (AB 3211, aka the California Provenance, Authenticity and Watermarking Standards). So, it should be interesting to see whether California’s AI labeling efforts merge or whether the two authors move forward with one bill or the other.

Senator Becker and Tom Kemp testifying on behalf of California Senate Bill 942 aka the California AI Transparency Act

Evolution of SB 942

The bill has evolved since it passed the Senate. Before I discuss what changed, here is a summary of the bill via the Assembly Judiciary Committee’s bill analysis:

The bill mandates that AI-generated content must include provenance disclosures to verify its authenticity and origin. Additionally, it requires developers to provide tools that enable users to detect AI-generated content. The bill also seeks to ensure that those that generate open-source GenAI products police the third party licensees who use the software. To ensure targeted comprehensive coverage under this bill, any person or entity that produces a GenAI system with over 1,000,000 monthly users accessible within California must follow the provisions of the bill. This applies to any AI system that produces synthetic content, including text, images, videos, and audio, emulating the training data's structure and characteristics. This definition is crucial as it specifically targets the types of AI that can create misleading or harmful content, ensuring major players in the AI industry are regulated without overburdening smaller developers. These measures aim to enhance transparency and accountability, ensuring that consumers are informed about the nature of the content they encounter online.

The bulk of the changes to the bill came in the Privacy Committee, where the following major changes were made from the Senate version of the bill: (a) the bill no longer applies to generated text, (b) manifest disclosures are now optional, and (c) if a covered provider knows that a third-party licensee has modified a licensed GenAI system such that it is no longer capable of including a disclosure, the covered provider must revoke the license within 72 hours of discovering the licensee’s action.

Manifest vs. Latent Disclosure

It’s important to delve deeper into the concept of 'manifest' versus 'latent' disclosures, as this distinction is a key aspect of the bill's requirements. As per the Assembly Judiciary Committee's analysis:

Under this bill, AI-generated content must include machine-readable disclosures with information such as the provider’s name, the GenAI system version, creation date, and a unique identifier. This requirement ensures that the origin of the content can be traced back to the specific GenAI system responsible for its creation. These “disclosures” are to be included in content produced by GenAI systems. The measure envisions two means to implement disclosures required by the bill – manifest (visible), and latent (imperceptible to the human eye). Disclosures should be permanent or difficult to remove and detectable by the provider's own systems. This measure ensures that even if visible labels are removed, the content can still be identified as AI-generated through technical means and the use of the latent disclosure. Providers must also offer users the option to include visible disclosures in AI-generated content, indicating it was created by AI. Visible disclosures provide immediate transparency, allowing users to easily recognize AI-generated content. The bill describes the minimum information that these disclosures must convey:

* The name of the covered provider.

* The name and version number of the GenAI system used.

* The time and date of the content’s creation or alteration.

* Which parts of the content were created or altered by the GenAI system.

* A unique identifier.
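
To make the latent disclosure idea more concrete, here is a minimal sketch of what a machine-readable payload carrying these minimum fields might look like. The field names, the JSON encoding, and the embedding step are my own illustrative assumptions; the bill does not prescribe a specific schema or watermarking technique.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical latent-disclosure payload covering SB 942's minimum fields.
# Field names and the JSON encoding are illustrative assumptions only.
disclosure = {
    "provider_name": "ExampleAI, Inc.",        # name of the covered provider
    "system_name": "ExampleGen",               # GenAI system used
    "system_version": "2.1",                   # version number of the GenAI system
    "created_at": datetime.now(timezone.utc).isoformat(),  # time/date of creation or alteration
    "altered_portions": ["frames 120-360"],    # which parts were created or altered
    "content_id": str(uuid.uuid4()),           # unique identifier
}

# A provider would embed this payload imperceptibly in the content (e.g., via
# watermarking) so that its own detection tool can later read it back, even if
# any visible (manifest) label has been stripped away.
payload = json.dumps(disclosure)
print(payload)
```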

AI Detection Tool

As I mentioned previously, I proposed the ideas behind this bill to Senator Becker and co-drafted it. To be clear, part of the bill was based on ideas found in the proposed federal AI Labeling Act. But I also added to SB 942 the “AI Detection” concept, which is not found in the AI Labeling Act (I also wrote about this in my book Containing Big Tech; I’m not claiming it is an original idea, but as far as I know it had not appeared in previously proposed legislation). Here is how the Judiciary Committee describes this capability:

Covered providers must create and make available AI detection tools that allow users to assess whether content has been created or altered by their GenAI system. By mandating these tools, this bill empowers consumers to identify AI-generated content, thereby reducing the spread of misinformation and enhancing accountability. The tools can be uniquely created to be tailored within the providers’ system but must meet the following criteria:

* The AI detection tool shall be publicly accessible and available via a uniform resource locator (URL) on the covered provider’s internet website and through its mobile application, as applicable.

* The AI detection tool shall allow a person to upload content or a URL.

* The AI detection tool shall support an application programming interface (API) that allows a person to invoke the AI detection tool without visiting the covered provider’s website.

* The AI detection tool shall allow a person to provide feedback if the person believes the AI detection tool is not properly identifying content that was created by the provider.

In theory, a person who views a video circulating on social media that depicts President Joe Biden telling voters not to vote in the primary election could upload the video to a provider’s AI detection tool for analysis. By examining the embedded machine-readable disclosures, the user could identify the provider’s name, the GenAI system used, and the creation date, and conclude that the video was produced by AI. Revealing that the video is not genuine would, in turn, help prevent the spread of misinformation.

I am pleased to hear that other state and federal legislators are now considering adding this idea to their proposed AI transparency and labeling bills. Please note that this AI Detection Tool only applies to detecting a content provider’s own AI-generated content.
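
For illustration, here is a minimal sketch of how a person (or another application) might invoke such a detection tool through its API. The endpoint URL, request parameters, and response fields are hypothetical; the bill requires a publicly accessible tool and an API but does not define their shape.

```python
import requests

# Hypothetical detection-tool API call. The endpoint and response fields
# below are illustrative assumptions, not anything specified by SB 942.
API_URL = "https://provider.example.com/ai-detection/v1/check"

with open("suspect_video.mp4", "rb") as f:
    resp = requests.post(API_URL, files={"content": f})

result = resp.json()
if result.get("created_by_provider"):
    # The provider's latent disclosure was found and decoded from the file.
    print("AI-generated by:", result.get("provider_name"))
    print("System/version:", result.get("system_name"), result.get("system_version"))
    print("Created:", result.get("created_at"))
else:
    # A negative result only means this provider's disclosure was not found;
    # the tool cannot speak for content generated by other providers' systems.
    print("No disclosure from this provider was detected.")
```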

SB 942 (Becker) versus AB 3211 (Wicks)

As noted in my opening paragraph, there are two comparable, active bills on AI transparency: SB 942 (Becker) and AB 3211 (Wicks). It should be interesting to see whether the two sides combine forces or whether one bill wins out over the other as each goes through the Appropriations Committee in its respective house. Here is what the Privacy Committee bill analysis of SB 942 had to say:

Relationship to AB 3211 (Wicks, 2024). AB 3211 would require GenAI providers and the manufacturers of recording devices to include watermarking capabilities in the systems and devices they make available in California, and require social media platforms to label content with provenance info extracted from uploaded content. This bill overlaps with AB 3211 nearly entirely. Where this bill only seeks to address “step 1” in the three-step content authentication plan outlined above, AB 3211 seeks to implement the full plan. Where this bill lacks a particular mechanism for provenance disclosure, AB 3211 specifically requires watermarking. And where this bill’s philosophy is to set a “floor” that GenAI providers are able to innovate around and above, AB 3211 is far more prescriptive in its requirements.

Let me see how the initial conversations between the various sides go before weighing in. The key point, however, is that both bills have moved this far in the legislative process with overwhelming support, which clearly means there is momentum behind the concept.

Random Photos

Tom testifying on SB 942

Tom congratulating Senator Becker after the positive response to our testimony on SB 942 in the Assembly Judiciary Committee.

My photo of the side of the California State Capitol as I pulled out from the parking garage. It got up to 110 degrees! Yikes.
