The 12th Open Letter to Leaders of Mankind

To:

Leaders of all countries, the Secretary-General of the United Nations, world-top scientists and scholars, renowned entrepreneurs and prominent media figures.

As you read this letter, generative AIs like ChatGPT and DeepSeek are sweeping across the globe with millions of interactions per second—they can write articles, devise plans, and even simulate human thought. Yet, as we marvel at the convenience of generative AI, do we realize that this "intelligent key" has quietly unlocked a new Pandora's box of uncontrolled technology?

For over forty years, I have consistently sounded the alarm for human survival: from the proliferation of nuclear weapons to the risks of gene editing, from early AI ethics debates to today's explosion of generative AI. Behind every leap in technology lies the fatal trap of "irrational development". In my previous eleven open letters, I urged you to confront the crisis that "the unchecked development of technology will soon lead to human extinction" and called for using the governing power of humanity's Great Unification to restrain high-risk technologies. Today, the speed of proliferation and potential risks of generative AI far surpass those of any previous technology. If we do not act now, we may miss the final window to safeguard human survival.

You ought to understand that the risks of generative AI go far beyond familiar issues like "algorithmic bias" or "job displacement". Its capacity for self-learning is breaching human-defined boundaries: research shows that advanced AI models can independently deduce principles of military technology and even generate misleading information to manipulate public opinion. More alarmingly, over 20 countries worldwide are secretly advancing research on "AI weaponization". If the uncontrolled technological race among nations continues, within a century AI weapons capable of autonomous decision-making, together with maliciously exploited generative AI, could become "invisible blades" that destroy humanity. This is not alarmism—just as I warned of the overall risks of technological development in my first open letter in 2007 and called for guarding against AI self-awareness in 2015, the risks of generative AI today are concrete and real.

In my past letters, I have always emphasized that humanity needs technology, but it must be "technology with reverence". Without a globally unified regulatory framework, generative AI will ultimately become a weapon for a few nations vying for hegemony and a tool for a few corporations chasing profits, and further loss of control will inevitably endanger the holistic survival of humanity. The current state of governance, in which "each country sets its own rules", is precisely the root cause of technological runaway: if one country bans high-risk AI research, others accelerate it; if one regulates data usage, cross-border data flows remain untraceable. This "prisoner's dilemma" mirrors the overall technological risks I have warned about for decades, much like the once-dominant concern of nuclear proliferation. Yet generative AI proliferates hundreds of times faster than nuclear weapons ever did, and it is far harder to detect.

Therefore, today, I once again plead with you in the name of human survival:

First, under the leadership of the United Nations, immediately initiate negotiations for a global convention on regulating generative AI. Bring together technology experts, ethicists, and representatives from all nations to explicitly prohibit high-risk applications such as autonomous AI weapons and AI-driven large-scale manipulation of public opinion.

Second, integrate generative AI regulation into the core agenda of the United Nations. Establish a global AI safety testing center under UN auspices to conduct unified safety certifications for AI products from all countries, preventing risk spillover caused by regulatory loopholes.

Third, while the above two points are currently urgent, they are, in my view, only temporary measures. The fundamental way to avoid a technological extinction crisis is to achieve the Great Unification of all humanity, using the power of a world regime to control science and technology. Therefore, we must accelerate the consensus on the concept of "the Great Unification of Humanity". I implore you to place "the holistic survival of humanity" above national interests and above corporate interests, ensuring that technology truly serves humanity rather than destroying it.

Leaders, for over forty years, I have never retreated in the face of doubt, because I deeply understand: there is no "room for trial and error" for the holistic survival of humanity. The heated discussion ignited by generative AI is precisely an opportunity to awaken global consensus—it makes ordinary people realize that technological risks are not distant science fiction but a reality that must be confronted now. In response to the previous eleven letters, Nobel laureates have voiced support, international organizations have shown concern, and corporations have proactively engaged in regulation. This proves that a consensus on "saving humanity" is coalescing.

As I write today, I have transformed from the youth of over forty years ago into an old man, yet I remain that persistent advocate striving for human survival. My sole desire is this: to make technology a ladder for human progress, not an abyss of extinction; to allow our descendants to live securely in an age of reason, not to struggle in the fear of runaway technology.

The holistic survival of humanity is paramount! I earnestly request that you respond to this mission with action and safeguard the future with consensus—this is not only a responsibility to the present but also a solemn commitment to our very species.

Founder of Humanitas Ark (formerly the Save Human Action Organization)

Hu Jiaqi

December 10, 2025

Media Contact
Company Name: Humanitas Ark
Contact Person: Hu Jiaqi
City: Beijing
Country: China
Website: en.savinghuman.org