
Superintelligence development: Better slow than sorry

chinadaily.com.cn | Updated: 2026-02-12 18:12
[Illustration by Jin Ding/China Daily]

Editor's note: As tech giants and research institutes across the world race to develop artificial general intelligence, with some even aiming to usher in an era of superintelligence, an open letter issued months ago calling for a temporary "prohibition" on the development of superintelligence has garnered support from some scientists, including artificial intelligence pioneers. Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, spoke to Peng Fei, a commentator for People's Daily, about the impact superintelligence could have and why safety should be the top priority. Below are excerpts from the interview. The views don't necessarily represent those of China Daily.

Artificial general intelligence generally refers to an information-processing system with strong generalization capability, one that approaches or matches the level of human intelligence and has broad application prospects.

Artificial superintelligence, by contrast, refers to intelligence that surpasses human intelligence in all aspects and is regarded as a life-like entity. This means it would develop autonomous consciousness, and many of its thoughts and actions would likely be incomprehensible to humans, and therefore less controllable.

It is hoped that superintelligence will be "super-altruistic", but what if it turns out to be "super-malevolent"? It is this sense of uncertainty that causes concern.

Superintelligence cannot be simply compared to any technological tool in history, as the possibility of it possessing independent cognition and surpassing human intelligence presents an unprecedented challenge. If the goals of superintelligence are inconsistent with human values, even minor deviations could be amplified by its capabilities and lead to catastrophic consequences.

Safety must be the first priority for the development of superintelligence. That is to say, safety should be embedded in the technology's "genes". Safety guardrails should not be lowered out of concern that they may limit a model's capabilities. Comprehensive assessment is needed to identify as many potential hazards as possible and strengthen the model's safety.

Typical security issues such as privacy leakage and disinformation can be effectively addressed and short-term risks properly handled through the technical cycle of "attack-defense-evaluation" and the continuous upgrading of the model.

But in the long run, the real challenge lies in aligning artificial superintelligence with human expectations. Reinforcement learning from human feedback, the current approach that embeds human values into AI through human-machine interaction, will likely prove ineffective for superintelligence.

Given that superintelligence may develop self-awareness, an ideal vision is to make it develop moral intuition, empathy and altruism on its own, rather than merely relying on values and rules imposed from the outside. Risks can only be minimized when AI evolves from being ethically compliant to having morality.

Humanity needs to prevent the development of AI from turning into an "arms race". The creation of the world's first superintelligence might not require international cooperation, but ensuring that superintelligence is safe and reliable for all humanity will require global collaboration.

The world needs an efficient and effective international institution to coordinate the governance of AI and ensure its safety. In August 2025, the United Nations General Assembly decided to establish the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance to promote sustainable development and bridge the digital divide. Explorations in this regard should be further deepened and continued.

Those countries with advanced AI technologies bear a greater responsibility and obligation to prevent the reckless development of superintelligence in the absence of rules.

China advocates building a community with a shared future for humanity and a community with a shared future in cyberspace. Emphasizing the coordination of development and safety, the country has also put forward the Global AI Governance Initiative. These concepts deserve to be promoted and implemented globally in the field of AI as well.

It is better to slow down a bit to lay a solid foundation for safety, than to seek quick success and instant benefits that might lead human society into an irreversible and perilous situation.
