As China’s latest regulatory measures take effect on September 1, 2025, social media giants across the country are now required to prominently flag all content generated by artificial intelligence, marking a significant expansion of the government’s control over digital information.
The “Measures for Labeling AI-Generated Content,” released in March by the Cyberspace Administration of China (CAC) and three other government agencies, mandate that platforms implement both visible labels and embedded metadata for synthetic content.
Major platforms including WeChat, Douyin, Weibo, and RedNote have already implemented the necessary changes to comply with the new regulations. The rules specifically target text, images, audio, video, and virtual scenes created or manipulated by AI technologies, with particular concern focused on deepfakes and other potentially misleading synthetic media.
Content that could “mislead or confuse users” must carry explicit markers at the beginning, middle, or end of the material.
The regulatory framework establishes a three-tiered review system for platforms to verify and flag AI-generated content. First, platforms must detect explicit metadata markers; second, they must use algorithmic inference to mark “suspected” AI content; and third, they must issue risk notices for content whose origin cannot be definitively determined.
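The three tiers can be thought of as a cascading decision procedure: trust explicit provenance metadata first, fall back to algorithmic detection, and flag everything else whose origin is uncertain. The following is a minimal sketch of that logic; the field name `aigc_label`, the label values, and the detector thresholds are all hypothetical illustrations, not part of the published measures.

```python
from enum import Enum

class Label(Enum):
    AI_GENERATED = "ai-generated"      # tier 1: explicit metadata marker found
    SUSPECTED_AI = "suspected-ai"      # tier 2: inferred by a detection model
    UNVERIFIED = "origin-unverified"   # tier 3: risk notice, origin unclear
    UNLABELED = "unlabeled"            # no marker and no detection signal

def classify(metadata: dict, detector_score: float,
             high: float = 0.8, low: float = 0.3) -> Label:
    """Cascade through the three review tiers described in the measures.

    `metadata` stands in for whatever embedded provenance fields a platform
    parses out of uploaded content; `detector_score` for the output of an
    AI-content classifier in [0, 1]. Both are assumptions for illustration.
    """
    # Tier 1: an explicit marker embedded by the generator settles the question.
    if metadata.get("aigc_label") == "ai-generated":
        return Label.AI_GENERATED
    # Tier 2: no marker, but the detector is confident -> mark as "suspected".
    if detector_score >= high:
        return Label.SUSPECTED_AI
    # Tier 3: ambiguous signal -> attach a risk notice rather than a verdict.
    if detector_score >= low:
        return Label.UNVERIFIED
    return Label.UNLABELED
```

The cascade order matters: explicit metadata is cheap and reliable to check, so it short-circuits the more expensive and error-prone inference step.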
Downloadable AI-generated files must contain embedded labels within the file itself to maintain identification integrity after distribution.
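One way a label can survive redistribution is to live inside the file format itself rather than in a platform-side database. As an illustration only (the measures do not prescribe a specific mechanism or chunk name), here is a sketch that inserts a provenance label into a PNG as a standard `tEXt` chunk, using only the Python standard library:

```python
import struct
import zlib

def add_png_text_chunk(png_bytes: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt chunk (e.g. a provenance label) right after the IHDR
    chunk, so the label travels inside the file wherever it is copied."""
    if png_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # IHDR is always first: 8-byte signature + 4 length + 4 type + 13 data + 4 CRC.
    ihdr_end = 8 + 4 + 4 + 13 + 4
    # tEXt payload is keyword, a NUL separator, then the text (Latin-1).
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    # CRC covers the chunk type and data, per the PNG specification.
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF)
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

A chunk like this persists through downloads and re-uploads that copy the file verbatim, though it is easily stripped by any tool that rewrites the image, which is why the regulations pair embedded metadata with visible labels.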
These measures align with China’s broader “Qinglang” campaign, which aims to create a “clear and bright” online environment by curbing misinformation, fraud, and security threats.
Technical standards supporting the implementation were published nearly two years earlier by the National Information Security Standardisation Technical Committee, indicating the government’s long-term planning for this regulatory expansion.
Industry observers note that the regulations represent one of the world’s most extensive approaches to AI content labeling, requiring significant technical investments from platforms while raising questions about how effectively users will distinguish between human and AI-created materials in their daily social media consumption.
The CAC has also released draft GenAI Security Incident Response Guidelines that categorize AI security incidents into ten types and four severity levels, further demonstrating China’s comprehensive approach to AI governance.
Content creators must also proactively declare AI-generated material when posting on these platforms, adding another layer of responsibility for compliance with the new regulations.