
A group photo of the AI governance roundtable dialogue in Beijing, November 9, 2025. /Peking University
A group of leading experts has called for stronger global cooperation on artificial intelligence (AI) governance, highlighting the need to balance security and innovation as AI technologies accelerate.
Recently, a roundtable on AI governance convened at Peking University (PKU) in Beijing, bringing together scholars and industry leaders from China and abroad to discuss the challenges and opportunities posed by rapid AI development.
In his opening remarks, Wang Dong, associate dean of the Office of Humanities and Social Sciences at PKU, said AI is reshaping industries, societies and the global landscape at an unprecedented speed.
He stressed that AI governance is a defining issue of the current era, highlighting the need to strike a balance between ensuring security and fostering innovation through collaboration among academia, policymakers and industry.
Yang Xiaolei, deputy director of the Artificial Intelligence Research Institute at PKU, noted that AI, as the core driving force of a new round of technological revolution, brings both tremendous opportunities and complex risks. He emphasized that global collaboration is essential to unlocking AI's potential while ensuring its development remains "safe, reliable and fair."
Jia Qingguo, director of the Institute for Global Cooperation and Understanding at PKU, said that although AI is greatly improving productivity and advancing human development, it also presents serious challenges, including ethical dilemmas and the risk of losing control. He underscored the importance of international cooperation to address these challenges.
The participants held in-depth discussions on topics including the similarities and differences between AI governance models in China and other countries, and how to balance AI's safety baseline with its innovative vitality.
Zhang Linghan, professor and dean of the Artificial Intelligence Law Research Institute at China University of Political Science and Law, analyzed the commonalities and differences in governance models across nations, pointing out that most countries adopt a mix of governance tools, with different focuses and approaches. She stressed that governance models must align with each country's institutional capacity.
Jason Zhou, senior research manager at Concordia AI, provided insights into China's AI safety governance, discussing current policies, regulations and technical standards. He compared China's approach with those of the European Union and the U.S., noting a growing global consensus on the security risks posed by AI and calling for strengthened international cooperation on best practices and datasets for security testing.
Experts also delved into the security risks, innovation opportunities and regulatory challenges associated with AI technology.
Francis Steen, associate professor at the University of California, Los Angeles, analyzed society's limitations in understanding AI risks, pointing out that AI is an extension and reflection of human intention, data and behavior. He emphasized the central role of humans in the development of technologies and argued that future governance must be based on reaffirming human values and responsibilities.
Dmitry Yudin, head of the Intelligent Transport Laboratory at the Moscow Institute of Physics and Technology, said the most pressing safety concerns lie in autonomous driving and intelligent robotics, and outlined Russia's efforts to promote AI innovation, including its recent introduction of AI ethics guidelines. He suggested international demonstration projects and training programs to help regulators better understand and embrace cutting-edge technologies.
The roundtable provided a platform for diverse global perspectives and practices on AI governance, offering valuable insights for seeking cooperation and balancing security and innovation in a complex geopolitical environment.
Source: CGTN