Time to specify AI service management norms

Editor's note: The Cyberspace Administration of China is seeking the public's views, from April 11 to May 10, on draft measures aimed at managing emerging foreign and domestic artificial intelligence (AI) services. These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI, such as data protection and security. Three experts share their views on the issue with China Daily.


The era of generative artificial intelligence is here. ChatGPT attracted about 100 million users in just two months, becoming a phenomenon in no time and prompting major internet companies to accelerate the development of their own generative AI service platforms.

While Google has begun public testing of its "Bard" chatbot, Anthropic, a company founded by a former vice-president of OpenAI, the developer of ChatGPT, has released "Claude+". In China, Baidu has launched "ERNIE Bot" and Alibaba has released "Tongyi Qianwen", while Tencent, 360, Huawei and other internet giants have announced their own plans for generative AI services.

Generative AI technology not only generates high-quality and diverse content such as texts, images and music, but can also be used in automatic programming, machine learning, data generation, game development, virtual reality and other fields. It has already started changing the way we live and work, creating new opportunities for the development of different industries. In terms of technology, generative AI is driving development in fields such as natural language processing, computer vision and audio processing, offering new methods and ideas for solving complex scientific problems.

On the economic front, generative AI has provided new growth points and business models for various industries such as media, education, entertainment and advertising, helping boost economic growth. In terms of society, generative AI has provided people with new communication methods and entertainment sources including chatting, gaming and reading, enriching people's lives.

However, as generative AI technology continues to advance and be applied to different fields, it poses significant potential risks and challenges. First, it can deepen the problem of information distortion and abuse. A recent Europol report, for example, said the possibility of tools like ChatGPT being used for crime is increasing. Such tools can be used for phishing and online fraud, as well as for impersonating the speaking style of specific individuals or groups, leading potential victims to trust criminals.

Second, there is the serious problem of algorithmic bias and prejudice. Unbalanced data sets, biased feature selection, or subjective human labeling can all lead to algorithmic bias, reinforcing social and economic inequality and influencing political decision-making.

Third, deep fakes are a serious problem. Generative AI technology can create extremely realistic fake text, images, audio and video, which can be used to fabricate false news, mislead the public and trigger political unrest.

And fourth, infringement on privacy is a major issue that requires greater attention. Generative AI systems could collect, store and use personal information, which could be improperly used or leaked, thereby infringing on personal privacy. More importantly, generative AI may affect human creativity and thinking ability by making people overly reliant on machine-generated content. It could also change social relationships and values, reduce human communication (including emotional communication) and interaction, and cause people to lose their subjective judgment and critical thinking.

We need to fully recognize these risks and challenges, and strive to establish targeted and comprehensive service management standards for generative AI at four levels, while balancing technological development and social interests, in order to ensure generative AI technology is safe and ethical, and serves humanity.

At the technical level, we need to strengthen the development of trustworthy AI technology, ensure it is accurate, interpretable and controllable, and prevent it from generating erroneous and harmful content. We also need to strengthen security protection and improve tools for detecting fake content produced by generative AI, to prevent it from being tampered with or abused.

At the legal level, it is necessary to formulate and improve laws, regulations and standards related to generative AI, clarify the scope of its use and the attendant responsibilities, protect the legitimate rights and interests of entities, and punish any illegal acts. An effective judicial relief and dispute resolution mechanism also needs to be established to promptly handle disputes and controversies caused by generative AI.

At the ethical level, it is important to establish and abide by ethics and codes of conduct related to generative AI, respect human dignity, autonomy and diversity, and maintain social fairness, justice and stability. It is also necessary to spread ethical education on generative AI and raise the ethical awareness and literacy of its users and audiences.

At the social level, there is a need to promote communication and cooperation between the generative AI industry and various sectors of society, enhance mutual understanding and trust, and explore ways of utilizing the technology's potential and creating value. Attention should also be paid to the impact of generative AI on socioeconomic, cultural and educational fields, and society should proactively respond to the challenges and opportunities it gives rise to.

The author is a researcher at the Institutes of Science and Development, Chinese Academy of Sciences.

The views don't necessarily represent those of China Daily.