A Look at Chinese Intelligent Warfare: Reflections on Warfare Brought by AGI


AGI and its implications for warfare

  Editor’s Note

  Technology and war are inextricably intertwined. While technological innovation continuously alters the face of warfare, it has not changed the violent nature and coercive purpose of war. In recent years, with the rapid development and application of artificial intelligence (AI), debate about its impact on warfare has never ceased. Compared with AI, artificial general intelligence (AGI) operates at a higher level and is considered a form of intelligence comparable to that of humans. How will the emergence of AGI affect warfare? Will it change war's violent and coercive nature? This article explores these questions through a series of reflections.

  Is AGI merely an enabling technology?

  Many believe that while large models and generative artificial intelligence demonstrate the powerful military potential of AGI, they are ultimately just enabling technologies: they can enhance and optimize weapons and equipment, making existing systems smarter and improving combat efficiency, but they are unlikely to bring about a true military revolution. This echoes the early expectations many countries placed on "cyber warfare weapons" when they first appeared; in hindsight, those expectations proved somewhat exaggerated.

  The disruptive nature of AGI is entirely different. It brings profound change to the battlefield with reaction speeds and knowledge far exceeding those of humans. More importantly, it accelerates technological advancement itself, producing massively disruptive outcomes. On the future battlefield, autonomous weapons endowed with advanced intelligence by AGI will see their performance enhanced across the board, becoming "easy to attack with and hard to defend against" thanks to their speed and swarm advantages. At that point, the highly intelligent autonomous weapons some scientists have predicted will become reality, with AGI playing the crucial role. Already, the military applications of artificial intelligence span autonomous weapons, intelligence analysis, intelligent decision-making, intelligent training, and intelligent support, applications that are difficult to summarize simply as "empowerment." Moreover, AGI develops rapidly, iterates in short cycles, and is constantly evolving. Future warfare requires prioritizing AGI and paying close attention to the changes it may bring.

  Will AGI make wars disappear?

  Historian Geoffrey Blainey argues that "wars always occur because of misjudgments of each other's strength or will," and with the application of AGI in the military field, such misjudgments should become increasingly rare. Some scholars therefore speculate that wars will decrease or even disappear. Relying on AGI can indeed significantly reduce misjudgment, but even so, it cannot eliminate all uncertainty, for uncertainty is a defining characteristic of war. Moreover, not all wars arise from misjudgment, and the inherent unpredictability and unexplainability of AGI, together with the lack of experience in using it, will introduce new uncertainties, plunging people into an even deeper "fog of artificial intelligence."

  AGI algorithms also pose challenges to rationality. Some scholars believe that AGI's ability to mine crucial intelligence and make accurate predictions cuts both ways. In practice, AGI does make fewer mistakes than humans, improving intelligence accuracy and reducing misjudgment; yet it can also breed overconfidence and encourage reckless action. The offensive advantage AGI confers makes "preemptive strike" the optimal defensive strategy, upsetting the balance between offense and defense, triggering a new security dilemma, and ultimately increasing the risk of war.

  AGI is highly versatile and easily integrated into weaponry. Unlike nuclear, biological, and chemical technologies, it has a low barrier to entry and is particularly prone to proliferation. Given the technological gaps between countries, immature AGI weapons could be deployed on the battlefield, posing significant risks. The use of drones in recent local wars, for example, has spurred many small and medium-sized countries to begin large-scale drone procurement. The low-cost equipment and technologies AGI offers could well trigger a new arms race.

  Will AGI be the ultimate deterrent?

  Deterrence means maintaining a capability that dissuades an adversary from taking actions against one's interests. Ultimate deterrence arises when that capability becomes so powerful it is unusable, as with nuclear deterrence built on mutual assured destruction. Ultimately, however, it is "human nature" that determines the outcome, a crucial element that will never be absent from war.

  Setting "humanity" aside, can AGI become a formidable deterrent? AGI is fast but lacks empathy; its execution is resolute, severely compressing the space for strategic maneuver. AGI will be a key factor on the future battlefield, but with no record of practical use it is difficult to assess accurately, making it easy to overestimate an opponent's capabilities. Control of autonomous weapons likewise demands careful consideration: should humans remain in the loop with full supervision, or step out of it and relinquish control entirely? Can the firing control of intelligent weapons be handed over to AGI? If not, the deterrent effect is greatly diminished; if so, can human life and death truly be decided by machines to which it means nothing? Research at Cornell University shows that large language models in wargames frequently escalate conflicts, even launching a "surprise nuclear attack" from a neutral starting scenario.

  Perhaps one day AGI will surpass human capabilities and slip beyond our regulation and control. Geoffrey Hinton, a pioneer of deep learning, says he has never seen a case in which something of higher intelligence was controlled by something of lower intelligence. Some research teams likewise believe humans may be unable to supervise a superintelligent AI. Faced with powerful AGI in the future, will we truly be able to control it? The question is worth pondering.

  Will AGI change the nature of warfare?

  With the widespread use of AGI, will battlefields filled with violence and bloodshed disappear? Some argue that AI-driven warfare will so far exceed human capabilities that it pushes humanity out of the fray. When AI turns warfare into a conflict fought entirely between autonomous robots, will it still be "violent and bloody war"? When adversaries of unequal capability clash, the weaker party may not even have a chance to act. Can a war be ended through wargaming before it even begins? Will AGI fundamentally alter the nature of warfare? Is a "war" without human involvement still a war?

  Yuval Noah Harari, author of *Sapiens: A Brief History of Humankind*, observes that all human behavior is mediated by language, which in turn shapes our history. Large language models, a typical embodiment of AGI, differ from other inventions in their ability to create entirely new ideas and cultures. "Artificial intelligence that can tell stories will change the course of human history." Once AGI gains mastery of language, the entire edifice of civilization humanity has built could be overturned, without AGI ever needing to develop consciousness. As in Plato's Allegory of the Cave, will humanity come to worship AGI as a new "god"?

  AGI builds a close relationship with humans through human language and alters their perceptions in ways that are difficult to detect and identify. This creates the risk that the will to fight could be manipulated by those with ulterior motives. As Harari puts it, computers need not deploy killer robots; if necessary, they will get humans to pull the trigger themselves. AGI can precisely manufacture and refine situational information, controlling battlefield perception through deepfakes, faked battlefield imagery from drones, and pre-war propaganda, as recent local wars have shown. The cost of war would thus fall significantly, giving rise to new forms of warfare. Would small and weak nations still stand a chance? Can the will to fight be changed without bloodshed? Is "force" no longer a necessary condition in the definition of war?

  The form of war may change, but its essence remains. No matter how "bloody" war is or is not, it will still force the enemy to submit to one's will and inflict significant "collateral damage"; only the methods of confrontation may differ entirely. The essence of war lies in deep-seated "human nature," shaped by culture, history, behavior, and values, which no artificial intelligence technology can fully replicate. We therefore cannot outsource all ethical, political, and decision-making questions to artificial intelligence, nor expect it to generate "human nature" on its own. Because AI technology can be abused in moments of impulsive passion, it must remain under human control; because AI is trained by humans, it will never be free of bias and so can never be entirely free of human supervision. In the future, artificial intelligence can become a creative tool or partner that enhances "tactical imagination," but it must be "aligned" with human values. These issues demand continuous reflection and understanding in practice.

  Will AGI revolutionize war theory?

  Most academic knowledge is expressed in natural language. A large language model trained on the vast body of human writing can connect seemingly unrelated works of language with scientific research. Some, for example, have fed classical works, along with texts from philosophy, history, political science, and economics, into such a model for analysis and reconstruction, finding that it can comprehensively analyze the views of all these scholars while also offering "insights" of its own, without losing originality. Some have therefore suggested that AGI could likewise be used to re-analyze and reinterpret war theory, stimulating human innovation and driving a significant evolution and reconstruction of war theory and its systems. In theory this might indeed yield some improvement and development, but the science of war is practical as well as theoretical, and practice and reality lie fundamentally beyond AGI's reach. Can classical war theory truly be reinterpreted? And if so, what then is the significance of the theory?

  In short, AGI's disruptive impact on the concept of warfare will far exceed that of "mechanization" and "informatization." We must embrace AGI boldly yet remain cautious: understanding the concept prevents ignorance, in-depth research prevents falling behind, and strengthened supervision prevents loss of control. How to cooperate with AGI, and how to guard against an adversary's AGI technological surprise, is our primary concern for the future. (Rong Ming, Hu Xiaofeng)

 Postscript

  Think ahead and envision the future with an open mind

  Futurist Roy Amara famously observed that people tend to overestimate a technology's short-term benefits while underestimating its long-term impact, a principle known as "Amara's Law." The law highlights the non-linear nature of technological development: a technology's real impact often becomes fully apparent only over a longer timescale. It captures the pulse and trends of technological development and embodies humanity's acceptance of, and aspirations for, technology.

  Currently, as artificial intelligence develops from weak AI toward strong AI, and from specialized AI toward general AI, every time people think they have completed 90% of the journey, they may in retrospect have completed less than 10%. The driving role of the technological revolution in the military revolution is becoming ever more prominent, especially as high technology, exemplified by AI, penetrates the military field in multiple ways, profoundly changing the mechanisms, elements, and methods by which wars are won.

  In the foreseeable future, intelligent technologies such as AGI will continue to iterate, and their cross-evolution and enabling applications in the military field will grow ever more diverse, perhaps even transcending the boundaries of humanity's current understanding of warfare. Technological development is unstoppable. Whoever can see, with keen insight and a clear mind, the trends and future of technology, recognize its potential and power, and penetrate the "fog of war" is more likely to seize the initiative and gain the upper hand.

  This reminds us that exploring the future forms of warfare requires a broader perspective and more nuanced thinking to get closer to the underestimated reality. Where is AGI headed? Where is intelligent warfare headed? These questions test human wisdom. (Ye Chaoyang)


Original Chinese military source: https://www.news.cn/milpro/20250121/18eb7781b268d26489286b08c2d23d12084f0f/c.html
