Publication
Global | Publication | October 2024
In response to the rapid development of artificial intelligence (AI) technologies, the Cyberspace Administration of China (the CAC) recently issued two draft regulations for public consultation: the Measures for Labelling Artificial Intelligence-Generated or Synthetic Content (the Draft AI Labelling Measures) and Cybersecurity technology—Labelling method for content generated by artificial intelligence (the Draft Labelling Method Standard). The Draft Labelling Method Standard is a mandatory national standard that serves as the supporting and implementing rule for the Draft AI Labelling Measures. Both regulations aim to address deepfake-related risks and ensure the authenticity and credibility of publicly available information.
Our take
Once adopted, these regulations are designed to enhance the traceability and transparency of AI-generated content, reduce the spread of false or misleading information and better protect the rights of content creators and the general public, aligning with China’s stated commitment to playing a leading role in global AI governance. We expect that the Draft AI Labelling Measures and the Draft Labelling Method Standard will accelerate the development of AI detection technologies in China.
Players operating internationally who are involved in the creation and dissemination of AI-generated content now face an increasingly complex web of AI labelling regimes, each with its own scope and requirements. Notably, the Draft AI Labelling Measures apply to all internet information service providers targeting the public in Mainland China.
The EU AI Act includes measures applying from August 2026 that will require:
- providers of AI systems (including general-purpose AI systems) that generate synthetic audio, image, video or text content to ensure that outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated; and
- deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake to disclose that the content has been artificially generated or manipulated.
In California, meanwhile, a law was recently enacted (California SB 942) which takes effect from January 2026. It requires covered providers (any person who creates, codes or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of California) to: (1) offer users the option to include a manifest disclosure (i.e. visible to the user) in content created or altered by the covered provider’s generative AI system; (2) include (on a non-optional basis) a latent disclosure (i.e. invisible to the user) in content created by the covered provider’s generative AI system; and (3) make an AI detection tool available so that users can check content for the manifest and/or latent disclosures. Covered providers must also revoke the licence of any third-party licensee that they know has modified a generative AI system so that it is no longer capable of including the relevant disclosures.
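By way of illustration only, a latent disclosure can be as simple as a machine-readable provenance record embedded in a file’s metadata. The sketch below assumes the Pillow imaging library and an invented "AIDisclosure" field; SB 942 prescribes no particular format, so everything here is hypothetical.

```python
# Illustrative sketch only: embedding a "latent disclosure" as a PNG text chunk
# with Pillow. The "AIDisclosure" key and the record's fields are hypothetical;
# SB 942 prescribes no particular format.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_latent_disclosure(src_path: str, dst_path: str, provider: str) -> None:
    """Write a machine-readable provenance record into the image's metadata."""
    record = json.dumps({
        "ai_generated": True,   # content created or altered by a generative AI system
        "provider": provider,   # name of the covered provider (hypothetical field)
        "spec": "example-disclosure/0.1",
    })
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIDisclosure", record)  # hypothetical key
    image.save(dst_path, pnginfo=meta)

def read_latent_disclosure(path: str) -> dict | None:
    """Recover the record if present: a toy stand-in for an 'AI detection tool'."""
    chunks = getattr(Image.open(path), "text", {}) or {}
    raw = chunks.get("AIDisclosure")
    return json.loads(raw) if raw else None
```

In practice, a plain metadata field of this kind is trivially stripped, which is why latent disclosures in the SB 942 sense, and machine-readable marking under the EU AI Act, are more commonly associated with robust watermarking of the content itself or signed provenance manifests such as C2PA Content Credentials.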
In terms of how to comply, neither the EU nor California has published specific standards on how compliance can be achieved, although California’s law requires that the AI detection tool support an API allowing users to invoke it without visiting the covered provider’s website. In the EU, providers do not necessarily need to fulfil their obligations on marking in a machine-readable format via watermarking, though the recitals mention it as a possibility. Stakeholders have proposed the International Standard Content Code (ISCC) under ISO 24138 as a possible means of compliance from an EU AI Act perspective, though it is not yet on the work programme of CEN/CENELEC JTC21, the committee fulfilling the European Commission’s AI Act standardisation request. At this stage, it is not yet clear how far regulatory expectations globally will align with the Draft AI Labelling Measures.
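For a sense of what the API requirement might look like in practice, the sketch below shows a minimal client for a hypothetical detection endpoint. The URL, authentication scheme and response schema are all invented: the statute requires only that the detection tool support an API, not any particular design.

```python
# Hypothetical client for an SB 942-style detection API. The endpoint URL,
# authentication scheme and response schema are invented for illustration;
# the statute requires only that the detection tool "support an API".
import requests

DETECTION_ENDPOINT = "https://api.provider.example/v1/ai-detection"  # hypothetical

def check_content(file_path: str, api_key: str) -> dict:
    """Ask the (hypothetical) provider whether it created or altered this file."""
    with open(file_path, "rb") as f:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"content": f},
            timeout=30,
        )
    response.raise_for_status()
    # Expected shape, e.g. {"ai_generated": true, "disclosure": "latent"}
    return response.json()
```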