In response to the rapid development of artificial intelligence (AI) technologies, the Cyberspace Administration of China (the CAC) recently issued two draft regulations for public consultation: the Measures for Labelling Artificial Intelligence-Generated or Synthetic Content (the Draft AI Labelling Measures) and Cybersecurity technology—Labelling method for content generated by artificial intelligence (the Draft Labelling Method Standard). The Draft Labelling Method Standard is a mandatory national standard that serves as the supporting and implementing rule for the Draft AI Labelling Measures. Both instruments aim to address deepfake-related risks and ensure the authenticity and credibility of publicly available information.
Here are the key points:
- Applicable subjects and jurisdictional scope: The Draft AI Labelling Measures primarily apply to internet information service providers (the Service Providers), including AI content generation and online content dissemination service providers. Entities and institutions that do not provide services to the public in Mainland China are excluded from their scope. However, service providers outside China targeting the public in Mainland China are likely to be captured under these regulations.
- Service Providers’ labelling obligations: The Draft AI Labelling Measures define two types of labelling: explicit labels and implicit labels. Explicit labels are perceptible to users and displayed as text, sound, images or other forms. Implicit labels, by contrast, are embedded in the metadata of AI-generated content and can only be extracted through technical means, remaining invisible to users (see the illustrative sketch after this list). The Draft AI Labelling Measures outline the scenarios in which explicit or implicit labels are required, while the Draft Labelling Method Standard provides detailed, context-specific guidelines for implementing both types of labels, which Service Providers must follow.
- Additional obligations for Service Providers: In addition to labelling, the Draft AI Labelling Measures impose further obligations on Service Providers. These include, amongst others, incorporating labelling-related provisions into user service agreements and submitting label-related materials when undergoing algorithm filing or security assessment procedures.
- Differentiated labelling scenarios: The Draft AI Labelling Measures differentiate between content that is definitely, likely or merely suspected to be AI-generated, and require different actions in each case. For example, if a file’s metadata contains an implicit label, the online content dissemination service provider must add a prominent label to inform users that the content is AI-generated.
- Obligations of other regulated parties: App stores and users are also subject to labelling obligations. App stores must verify whether Service Providers have enabled the labelling function before allowing an app to be made publicly available. Users uploading AI-generated content to online platforms are required to actively declare and use the labelling function provided by the platform.
- Penalties for violations: While the Draft AI Labelling Measures do not specify penalties for non-compliance, enforcement is left to the CAC, which may refer to relevant laws, administrative regulations and departmental rules to impose penalties.
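For readers curious about the mechanics of implicit labelling, the following is a minimal, purely illustrative sketch in Python of how a label might be embedded in, and read back from, image metadata. The "AIGC" key, the field names and the value coding are our own assumptions for demonstration only; the Draft Labelling Method Standard prescribes its own metadata fields and formats, which this sketch does not reproduce.

```python
# Purely illustrative sketch: embeds and reads back a hypothetical implicit
# label in PNG text metadata using Pillow. The "AIGC" key and the JSON
# payload are assumptions made for demonstration only; the Draft Labelling
# Method Standard prescribes its own metadata fields, which this sketch
# does not reproduce.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def embed_implicit_label(src: str, dst: str) -> None:
    """Write a machine-readable label into a PNG's metadata (invisible to viewers)."""
    label = {
        "Label": 1,  # hypothetical coding: 1 = AI-generated
        "ContentProducer": "example-service",  # hypothetical producer identifier
    }
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("AIGC", json.dumps(label))  # stored as a PNG text chunk
    img.save(dst, pnginfo=meta)


def read_implicit_label(path: str) -> dict | None:
    """Extract the label, as a dissemination platform might before re-labelling."""
    text_chunks = getattr(Image.open(path), "text", {})  # PNG text chunks, if any
    raw = text_chunks.get("AIGC")
    return json.loads(raw) if raw else None
```

On this model, a content dissemination platform would call read_implicit_label on uploaded files and, where a label is found, display a prominent explicit label to users, mirroring the scenario described in the list above.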
Our take
These regulations align with China’s stated commitment to playing a leading role in global AI governance. Once adopted, they are designed to enhance the traceability and transparency of AI-generated content, reduce the spread of false or misleading information and better protect the rights of content creators and the general public. We expect the Draft AI Labelling Measures and the Draft Labelling Method Standard to accelerate the development of AI detection technologies in China.
Players operating internationally in the creation and dissemination of AI-generated content now face an increasingly complex web of AI labelling regimes, each with its own scope and requirements. As highlighted above, the Draft AI Labelling Measures apply to all internet information service providers targeting the public in Mainland China.
The EU AI Act includes measures applying from August 2026 that will require:
- Providers of AI systems generating synthetic audio, image, video or text content to ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.
- Deployers of AI systems generating deep fakes, or text published to inform the public on matters of public interest, to disclose that the content has been artificially generated or manipulated.
In California, meanwhile, a recently enacted law (California SB 942) takes effect from January 2026. It requires covered providers (any person who creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of California) to: (1) offer users the option to include a manifest disclosure (i.e., visible to the user) in content created or altered by the covered provider’s generative AI system; (2) include a mandatory latent disclosure (i.e., invisible to the user) in content created by that system; and (3) make an AI detection tool available so that users can make use of the manifest and/or latent disclosures. Covered providers must also revoke a licence where they know that a third-party licensee has modified the generative AI system so that it is no longer capable of including the required disclosures.
In terms of how to comply, neither the EU nor California has published specific standards on how compliance can be achieved, although California’s law requires that the AI detection tool support an API allowing users to invoke it without visiting the covered provider’s site. In the EU, providers do not necessarily need to fulfil their machine-readable marking obligations via watermarking, though the recitals mention it as one possibility. Stakeholders have proposed the International Standard Content Code under ISO 24138 as a possible means of compliance from an EU AI Act perspective, though it is not yet on the work programme of CEN/CENELEC JTC21, the committee fulfilling the EU Commission’s AI Act standardisation request. At this stage, it is not clear how far regulatory expectations globally will align with the Draft AI Labelling Measures.