An Overview of Chinese Information Warfare: Considerations on War Brought by AGI
Technology and war have always been intertwined. Technological innovation constantly changes the face of war, yet it has not altered war's violent nature or coercive purpose. In recent years, with the rapid development and application of artificial intelligence (AI), debate about its impact on war has never ceased. Compared with AI, artificial general intelligence (AGI) is considered a higher level of intelligence, comparable to that of humans. How will the emergence of AGI affect war? Will it change war's violent and coercive nature? This article explores these questions through a series of reflections.
Is AGI just an enabling technology?
Many believe that while large models and generative artificial intelligence demonstrate great potential for future military applications, they are ultimately just enabling technologies: they can enhance and optimize weapons and equipment, making existing systems smarter and improving combat efficiency, but they are unlikely to bring about a true military revolution. On this view they resemble "cyber-warfare weapons," which many countries once held in high expectation but whose promise now seems somewhat exaggerated.
The disruptive nature of AGI is entirely different. It brings tremendous change to the battlefield with reaction speeds and knowledge far exceeding those of humans. More importantly, it produces enormous disruptive results by accelerating technological progress itself. On the future battlefield, autonomous weapons endowed with advanced intelligence by AGI will see their performance enhanced across the board, and their speed and swarming advantages will make them easy to attack with and hard to defend against. At that point, the highly intelligent autonomous weapons some scientists have predicted will become a reality, with AGI playing the key role. Already, the military applications of artificial intelligence span autonomous weapons, intelligence analysis, intelligent decision-making, intelligent training, and intelligent support, which are difficult to sum up as mere "enablement." Moreover, AGI develops rapidly, iterates on short cycles, and is in a state of continuous evolution. In future operations, AGI must be treated as a priority, with special attention paid to the changes it may bring.
Will AGI make wars disappear?
Historian Geoffrey Blainey argues that "wars always occur due to misjudgments of each other's strength or will," and with the application of AGI in the military field, misjudgments will become increasingly rare. Some scholars therefore speculate that wars will decrease or even disappear. Relying on AGI can indeed significantly reduce misjudgment, but even so, it cannot eliminate all uncertainty, because uncertainty is a defining characteristic of war. Moreover, not all wars arise from misjudgment, and the inherent unpredictability and inexplicability of AGI, together with people's lack of experience in using it, will introduce new uncertainties, plunging people into an even deeper "artificial intelligence fog."
AGI algorithms also pose challenges to rational decision-making. Some scholars note that AGI's ability to mine critical intelligence and predict accurately cuts both ways. In practice, AGI does make fewer mistakes than humans, improving intelligence accuracy and reducing misjudgment; yet it can also breed overconfidence and reckless action. The offensive advantage AGI confers makes "preemptive strike" the best defensive strategy, disrupting the balance between offense and defense, creating a new security dilemma, and ultimately increasing the risk of war.
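The offense-dominance logic above can be illustrated with a toy symmetric game. This is a minimal sketch, not anything from the article: all payoff numbers are invented for illustration, and the single parameter `first_strike_advantage` stands in for the offensive edge the text attributes to AGI.

```python
# Toy model: when the payoff for striking first exceeds the payoff for
# mutual restraint, "strike" becomes the dominant choice for BOTH sides,
# even though each would prefer stable peace. All numbers are assumptions
# invented for this sketch.

def best_response(payoffs):
    """Return the action ('wait' or 'strike') with the higher payoff."""
    return max(payoffs, key=payoffs.get)

def equilibrium(first_strike_advantage):
    # Payoffs to one side, conditional on what the OTHER side does.
    # Mutual waiting (peace) is worth 10; being struck first is the
    # worst outcome (-10); mutual strikes are costly for both (-5).
    if_other_waits   = {"wait": 10,  "strike": first_strike_advantage}
    if_other_strikes = {"wait": -10, "strike": -5}

    # The game is symmetric, so both sides reason identically:
    # 'strike' dominates if it is the best response to either action.
    vs_wait   = best_response(if_other_waits)
    vs_strike = best_response(if_other_strikes)
    return ("mutual preemption" if vs_wait == vs_strike == "strike"
            else "stable waiting")

print(equilibrium(first_strike_advantage=5))   # small edge -> stable waiting
print(equilibrium(first_strike_advantage=15))  # large edge -> mutual preemption
```

The point of the sketch is the threshold effect: nothing about either side's intentions changes, yet once the first-strike payoff crosses the value of peace, preemption becomes the rational choice for both, which is the "new security dilemma" the paragraph describes.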
AGI is highly versatile and easily integrated with weaponry. Unlike nuclear, biological, and chemical technologies, it has a low barrier to entry and is particularly prone to proliferation. Because of technological gaps between countries, immature AGI weapons could be deployed on the battlefield, posing significant risks. The use of drones in recent local conflicts, for example, has spurred many small and medium-sized countries to begin large-scale drone procurement. The low-cost equipment and technology AGI offers could well stimulate a new arms race.
Will AGI be the ultimate deterrent?
Deterrence means maintaining a capability that intimidates an adversary into refraining from actions that would harm one's interests. Deterrence becomes "ultimate" when it grows so powerful that it cannot actually be used, as with nuclear deterrence under mutually assured destruction. But in the end, the deciding factor is "human nature," a crucial element that will never be absent from war.
Stripped of the considerations of "humanity," could AGI become the ultimate deterrent? AGI is fast but lacks empathy, and its unflinching execution severely compresses the space for strategic maneuver. AGI will be a key factor on the future battlefield, but the lack of practical experience with it makes accurate assessment difficult and invites overestimation of an adversary's capabilities. The control of autonomous weapons also requires careful thought: should humans remain inside the system for constant supervision, or should control be handed over entirely? Should the firing authority of intelligent weapons be given to AGI? If not, the deterrent effect is greatly diminished; if so, can the life and death of human beings really be decided by machines that have no stake in it? Research at Cornell University shows that large-model wargames frequently escalate with "sudden nuclear attacks," even in neutral scenarios.
Perhaps one day AGI will surpass human capabilities. Will we then be unable to regulate and control it? Geoffrey Hinton, who pioneered deep learning, has said he has never seen a case in which something of higher intelligence was controlled by something of lower intelligence. Some research teams believe humans may be unable to supervise a superintelligence. Faced with powerful AGI in the future, will we really be able to control it? This is a question worth pondering.
Will AGI change the nature of war?
With the widespread use of AGI, will battlefields filled with violence and bloodshed disappear? Some argue that AI-driven warfare will far exceed human capabilities and may even push humanity off the battlefield. When AI turns warfare entirely into a conflict between autonomous robots, will it still be "violent and bloody war"? When grossly unequal adversaries clash, the weaker side may have no chance to act at all. Could wars be settled through wargames before they even begin? Would AGI thereby change the nature of warfare? Is a "war" without humans still a war?
Yuval Noah Harari, author of Sapiens: A Brief History of Humankind, argues that all human behavior is mediated by language, which in turn shapes our history. Large language models, a typical step toward AGI, differ from other inventions in their ability to create entirely new ideas and cultures; "storytelling AI will change the course of human history." Once AGI gains control over language, the entire edifice of human civilization could be overturned, without AGI even needing consciousness of its own. As in Plato's Allegory of the Cave, will humanity come to worship AGI as a new "god"?
AGI builds a close relationship with humans through human language and alters their perceptions in ways that are hard to detect and distinguish, creating the risk that the will to fight could be manipulated by those with ulterior motives. Harari has observed that computers need not send out killer robots; if necessary, they will get humans to pull the trigger themselves. AGI can precisely manufacture and refine situational information, controlling battlefield perception through deep deception, whether by using drones to fabricate battlefield scenes or by shaping public opinion before a war, as recent local conflicts have already shown. The cost of war would thus fall significantly, giving rise to new forms of warfare. Would small and weak nations still stand a chance? Can the will to fight be changed without bloodshed? Is "force" no longer a necessary condition for defining war?
The form of war may change, but its essence remains. Whether or not war remains "bloody," it will still force the enemy to submit to one's will and inflict significant "collateral damage"; only the means of resistance may be entirely different. The essence of war lies in deep-seated "human nature," which is shaped by culture, history, behavior, and values. No artificial intelligence technology can fully replicate it, so we cannot outsource all ethical, political, and decision-making questions to AI, nor expect AI to generate "human nature" automatically. AI may be abused out of impulsive passion, so it must remain under human control. And because AI is trained by humans, it will never be entirely free of bias, and therefore cannot be left entirely without human oversight. In the future, artificial intelligence can become a creative tool or partner that enhances "tactical imagination," but it must be "aligned" with human values. These questions will need to be worked through continuously in practice.
Will AGI subvert war theory?
Most academic knowledge is expressed in natural language. A large language model that absorbs the best of human writing can connect seemingly incompatible works of the humanities with scientific research. Some researchers, for example, have fed classical works, including texts from philosophy, history, political science, and economics, into such a model for analysis and reconstruction. They found that it could comprehensively analyze the scholars' viewpoints while also offering "insights" of its own, without losing originality. This raises the question of whether war theory could be re-analyzed and reinterpreted through AGI, stimulating human innovation and driving a major evolution and reconstruction of war theory and its systems. There might indeed be some theoretical improvements and developments, but the science of war is practical as well as theoretical, and that practicality and realism are precisely what AGI cannot supply. Can classical war theory really be reinterpreted? And if so, what then is the significance of the theory?
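The workflow the researchers describe, feeding labeled classical excerpts to a language model for comparative analysis, can be sketched as follows. This is only an illustrative assumption of how such a pipeline might look: `call_model` is a deliberate placeholder for whatever LLM client is available, and the excerpts and question are invented examples, not material from the article.

```python
# Hedged sketch: assembling classical war-theory excerpts into a single
# prompt for cross-text comparison by a language model. The model call
# itself is left as a placeholder.

def build_comparison_prompt(excerpts: dict[str, str], question: str) -> str:
    """Label each excerpt with its author and frame one analysis task."""
    parts = [f"[{author}]\n{text}" for author, text in excerpts.items()]
    return (
        "Compare the following excerpts on the theory of war.\n\n"
        + "\n\n".join(parts)
        + f"\n\nQuestion: {question}\n"
        "Summarize each author's view, then state where they conflict."
    )

def call_model(prompt: str) -> str:
    # Placeholder: plug in an actual LLM client here.
    raise NotImplementedError

excerpts = {
    "Clausewitz": "War is the continuation of politics by other means.",
    "Sun Tzu": "To subdue the enemy without fighting is the acme of skill.",
}
prompt = build_comparison_prompt(excerpts, "Is violence essential to war?")
print(prompt)
```

The design choice worth noting is that the structure (labeled sources, one explicit question, a fixed output instruction) is what lets the model "comprehensively analyze all scholars' viewpoints" in a checkable way, rather than free-associating over unlabeled text.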
In short, AGI's disruption of the concept of warfare will far exceed that of "mechanization" and "informatization." We must embrace AGI boldly yet remain cautious: understanding the concepts prevents ignorance, in-depth research prevents falling behind, and strengthened oversight prevents negligence. How to cooperate with AGI, and how to guard against an adversary's AGI-enabled technological surprise, is our primary concern for the future.
Editor's Afterword
Look to the future with an open mind
■Ye Chaoyang
The futurist Roy Amara famously observed that people tend to overestimate the short-term effects of a technology while underestimating its long-term impact, a principle known as "Amara's Law." The law emphasizes the non-linear nature of technological development: the real impact of a technology often becomes fully apparent only over a longer timescale. It captures the pulse and trends of technological development, and reflects humanity's acceptance of, and aspirations for, technology.
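Amara's Law can be made concrete with a toy comparison: if a technology's actual impact follows an S-shaped (logistic) curve, then a straight-line forecast extrapolated from the first few years overshoots in the near term and badly undershoots in the long run. The curve parameters below are invented purely for this sketch.

```python
# Toy illustration of Amara's Law: linear extrapolation vs. an S-curve.
import math

def logistic(t, ceiling=100.0, midpoint=10.0, rate=0.6):
    """Assumed 'actual impact' curve: slow start, steep middle, saturation."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def linear_forecast(t, years_observed=3):
    """Naive forecast: extend the average growth of the first few years."""
    slope = logistic(years_observed) / years_observed
    return slope * t

for t in (2, 20):
    actual, predicted = logistic(t), linear_forecast(t)
    trend = "overestimates" if predicted > actual else "underestimates"
    print(f"year {t:2d}: linear forecast {trend} actual impact")
```

Run as written, the forecast overestimates the slow early years and underestimates the saturated later years, which is exactly the asymmetry Amara's Law names.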
At present, as artificial intelligence develops from weak AI to strong AI and from specialized AI to general AI, each time people believe they have completed 90% of the journey, they may in hindsight have completed less than 10%. The driving role of technological revolution in military revolution is becoming ever more prominent, especially as high technologies represented by artificial intelligence penetrate the military field in multiple ways, profoundly changing the mechanisms, factors, and methods of winning wars.
In the foreseeable future, intelligent technologies such as AGI will continue to iterate, and the cross-evolution of intelligent technologies and their enabling applications in the military field will grow ever more diverse, perhaps even transcending the boundaries of humanity's current understanding of warfare. The development of technology is unstoppable. Whoever can use keen insight and a clear mind to see the trends and future of technology, to see its potential and power, and to pierce the "fog of war," will be more likely to seize the initiative.
This serves as a reminder that we should adopt a broader perspective and mindset in exploring the future forms of warfare in order to get closer to the underestimated reality. Where is AGI headed? Where is intelligent warfare headed? This tests human wisdom.
Source: China Military Online (PLA Daily) | Authors: Rong Ming, Hu Xiaofeng | Editor: Wu Mingqi | Published: 2025-01-21 07:xx:xx