Xinhua News Agency, Beijing, March 16. Title: Strengthening Full-Process Management to Guide Technology Toward Good: An Interpretation of the Measures for Labeling AI-Generated Synthetic Content
Xinhua News Agency reporter Wang Sibei
The Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration recently jointly issued the Measures for Labeling AI-Generated Synthetic Content. Focusing on the labeling of AI-generated synthetic content, the Measures use labels to alert users to possible false information, clarify the labeling responsibilities and obligations of the relevant service providers, and standardize labeling practices at every link in the chain of content production and dissemination. The Measures take effect on September 1, 2025.
Experts say the Measures are an important step in China's effort to promote security governance in the field of artificial intelligence, foster the standardized and healthy development of the industry, and guide technology toward good, marking a key milestone in building a safe and trustworthy ecosystem for generative artificial intelligence.
In recent years, new technologies such as generative artificial intelligence and deep synthesis have developed rapidly, providing convenient tools for generating and synthesizing text, images, audio, video, and other content.
“However, increasingly realistic AI-generated synthetic content has also given rise to new security risks such as the spread of false information, impersonation of identities, and generation of malicious content, weakening public trust in online content,” said Chen Chun, an academician of the Chinese Academy of Engineering and a professor at Zhejiang University. In his view, facing the global challenge of AI security governance, the four departments’ timely issuance of the Measures for Labeling AI-Generated Synthetic Content, together with the simultaneously released mandatory national standard Cybersecurity Technology: Labeling Methods for AI-Generated Synthetic Content, provides a key means of keeping the online ecosystem secure and orderly in the age of artificial intelligence.
An official from the Cyberspace Administration of China said the Measures focus on answering the questions of “what was generated,” “who generated it,” and “where it was generated,” promoting full-process security management from generation to dissemination and striving to make artificial intelligence technology trustworthy.
——For service providers, the Measures require that explicit labels be added to generated synthetic content such as text, audio, images, video, and virtual scenes. When offering functions such as downloading, copying, or exporting synthetic content, providers must ensure the files carry compliant explicit labels; implicit labels must also be embedded in the metadata of the generated files.
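To illustrate what an implicit, metadata-level label can look like in practice, the sketch below writes a machine-readable label into a PNG file's text metadata. This is a minimal sketch using only the Python standard library; the field names (AIGC, Label, Producer) are hypothetical placeholders, not the fields actually defined by the Measures or the mandatory national standard.

```python
import json
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Hypothetical implicit-label payload; the real field names and values
# are set by the national standard, not invented here.
label = json.dumps({"AIGC": {"Label": "1", "Producer": "example-service"}})

# Minimal 1x1 8-bit grayscale PNG with the label stored in a tEXt chunk.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = chunk(b"tEXt", b"AIGC\x00" + label.encode("latin-1"))
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
png = b"\x89PNG\r\n\x1a\n" + ihdr + text + idat + chunk(b"IEND", b"")

with open("labeled.png", "wb") as f:
    f.write(png)
```

An explicit label, by contrast, is one visible to end users, such as a caption or watermark rendered on the content itself; the implicit metadata label survives downloading, copying, and exporting of the file.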
——For internet application distribution platforms, the Measures require that, when an app is submitted for listing or review, platforms ask the app's service provider to state whether it offers AI generation and synthesis services, and verify its materials concerning the labeling of generated synthetic content.
——For users, the Measures require that those who use online content dissemination services to publish generated synthetic content proactively declare it and apply the labeling functions provided by the service platform.
In addition, no organization or individual may maliciously delete, tamper with, forge, or conceal the labels prescribed by the Measures, provide others with tools or services for such malicious acts, or harm the lawful rights and interests of others through improper labeling.
“The Provisions on the Administration of Algorithmic Recommendation of Internet Information Services, the Provisions on the Administration of Deep Synthesis of Internet Information Services, and the Interim Measures for the Administration of Generative Artificial Intelligence Services all set out labeling requirements, and the Measures further refine how labeling is to be implemented,” said Zhao Jingwu, associate professor at the School of Law of Beihang University. He noted that the Measures clarify the responsibilities and obligations of each party in the production and dissemination of synthetic content, enabling users to clearly recognize AI-generated content, reducing the harm from abuse of AI generation and synthesis technology, and guarding against risks such as using generated synthetic content to spread disinformation or commit telecom fraud.
On the one hand, clear content labeling improves information transparency, strengthens users’ right to know and to choose, and fosters a rational public understanding of artificial intelligence. On the other hand, for service providers and dissemination platforms, the labeling system is both a responsibility and an opportunity: while encouraging enterprises to pursue technological capability, it also heightens attention to the social impact of their products and pushes the whole industry toward more standardized, healthy development.
Good laws deliver good governance only through implementation. Experts believe that implementing the Measures will require coordination among all parties so that labeling takes root steadily and endures.
Jin Bo, deputy director of the Third Research Institute of the Ministry of Public Security, said that as labeling management is progressively integrated with mechanisms such as algorithm filing and security assessment, compliance with synthetic-content labeling may become a key focus of regulators’ AI supervision, inspections, and special enforcement campaigns. In this process, how to balance development with security and innovation with responsibility, and how to make enforcement more professional, refined, and intelligent so as to cultivate a safe, open, and fair AI industry ecosystem, will require in-depth exploration.
“In addition, the ‘human factor’ should be woven into the entire process of AI labeling management,” Jin Bo said, adding that efforts should focus on improving the public’s ability to judge the authenticity of content and trace its sources, actively cultivating public AI literacy, and ensuring that the benefits of AI technology are widely shared.
[Editor in charge: Cao Jing]