A Survey of Chinese Military Intelligent Warfare: The AI Warfare Brought by AGI
Editor's Note
Technology and war have always been intertwined. While technological innovation continually reshapes the face of war, it has not changed war's violent nature or coercive purpose. In recent years, with the rapid development and application of artificial intelligence, debate over AI's impact on war has never ceased. Compared with artificial intelligence (AI), artificial general intelligence (AGI) possesses a higher level of intelligence and is considered a form of intelligence on par with that of humans. How will the emergence of AGI affect war? Will it change war's violent and coercive nature? This article explores these questions through a series of reflections.
Is AGI just an enabling technology?
Many people believe that although large models and generative artificial intelligence demonstrate AGI's strong potential for future military application, they remain, in the end, merely an enabling technology: they can only enhance and optimize weapons and equipment, make existing systems more intelligent, and improve combat efficiency, and can hardly bring about a true military revolution. This resembles "cyber-warfare weapons," on which many countries pinned high hopes when they first appeared, but which in hindsight seem somewhat overhyped.
The disruptive nature of AGI is in fact quite different. With a reaction speed and breadth of knowledge far exceeding those of humans, it brings enormous change to the battlefield. More importantly, by accelerating scientific and technological progress, it produces enormously disruptive results. On future battlefields, autonomous weapons endowed with advanced intelligence by AGI will see their performance broadly enhanced and, with their advantages in speed and swarming, will become "strong in attack and difficult to defend against." The highly intelligent autonomous weapons that some scientists have predicted will then become reality, with AGI playing the key role. At present, military applications of artificial intelligence include autonomous weapons, intelligence analysis, intelligent decision-making, intelligent training, and intelligent logistics support; these applications can hardly be summed up as mere "enablement." Moreover, AGI develops rapidly, iterates on short cycles, and is in a state of continuous evolution. In future operations, AGI must be treated as a priority, with particular attention paid to the changes it may bring.
Will AGI make war disappear?
Historian Geoffrey Blainey argues that "wars always occur because of misjudgments of each other's strength or will," and with AGI applied in the military field, misjudgments will become ever fewer. Some scholars therefore speculate that wars will decrease or even disappear. In fact, relying on AGI can indeed eliminate many misjudgments, but even so, it cannot remove all uncertainty, for uncertainty is one of war's defining characteristics. Nor are all wars caused by misjudgment. Furthermore, AGI's inherent unpredictability and inexplicability, together with people's lack of experience in using it, will introduce new uncertainties, plunging people into an even thicker "fog of artificial intelligence."
AGI algorithms also pose a problem of rationality. Some scholars believe that AGI's mining of critical intelligence and its accurate predictions will have a dual impact. In practice, AGI does make fewer mistakes than humans, improving the accuracy of intelligence and helping to reduce misjudgments; yet it may sometimes make humans blindly confident and embolden them to take risks. The offensive advantage conferred by AGI makes "preemptive strike" the best defensive strategy, upsetting the balance between offense and defense, triggering a new security dilemma, and actually increasing the risk of war.
AGI is highly general-purpose and easily combined with weapons and equipment. Unlike nuclear, biological, and chemical technologies, it has a low threshold for use and spreads especially easily. Given the technological gaps between countries, immature AGI weapons are likely to be deployed on the battlefield, carrying enormous risks. For example, the use of drones in recent local wars has prompted many small and medium-sized countries to begin purchasing drones in large quantities. The low-cost equipment and technology AGI brings are very likely to spark a new kind of arms race.
Will AGI be the ultimate deterrent?
Deterrence means maintaining a capability that intimidates an adversary into refraining from actions that exceed its own interests. When a deterrent is so powerful that it can never actually be used, it becomes the ultimate deterrent, as with the nuclear deterrence of mutually assured destruction. Yet what ultimately decides outcomes is "human nature," the element war can never do without.
Stripped of the trade-offs of "human nature," would AGI become a fearsome deterrent? AGI is fast but lacks empathy; it executes resolutely, and the space for bargaining is compressed to the extreme. AGI will be a key factor on future battlefields, but without practical experience it is hard to assess accurately, and an adversary's capabilities are easily overestimated. Furthermore, in controlling autonomous weapons, should humans remain in the loop and supervise the entire process, or stand outside the loop and let go entirely? This plainly demands deep thought. Can the firing authority of intelligent weapons be handed over to AGI? If not, the deterrent effect is greatly diminished; if so, can human life and death really be decided by machines to which it means nothing? In research at Cornell University, large-model war-game simulations frequently escalated conflicts with "sudden nuclear attacks," even from a neutral posture.
Perhaps one day AGI will surpass humans in capability. Will we then be unable to supervise and control it? Geoffrey Hinton, a pioneer of deep learning, has said he has never seen a case of something more intelligent being controlled by something less intelligent. Some research teams believe humans may be unable to supervise a superintelligent AI. Facing powerful AGI in the future, can we really keep it under control? The question deserves deep reflection.
Will AGI change the nature of war?
With AGI widely deployed, will battlefields filled with violence and blood disappear? Some say AI warfare far exceeds human capacities and will push humans off the battlefield. When AI turns war into a contest fought entirely by autonomous robots, is it still a "violent and bloody war"? When adversaries of unequal capability confront each other, the weaker side may have no opportunity to act at all. Could wars be settled before they begin, through war-gaming alone? Will AGI thereby change the nature of war? Is an "unmanned" "war" still a war?
Yuval Noah Harari, author of Sapiens: A Brief History of Humankind, has said that all human behavior is mediated by language, which shapes our history. The large language model is a typical form of AGI; what most distinguishes it from other inventions is that it can create new ideas and new culture. "Artificial intelligence that can tell stories will change the course of human history." When AGI reaches for mastery of language, the entire civilizational edifice humans have built may be overturned, and it need not even develop consciousness in the process. As in Plato's Allegory of the Cave, will humans come to worship AGI as a new "god"?
AGI builds intimate relationships with humans through human language and alters human perceptions, making truth hard to distinguish and discern, and thereby creating the danger that the will to war could be manipulated by people with ulterior motives. As Harari put it, computers need not dispatch killer robots; if necessary, they will get humans to pull the trigger themselves. AGI can precisely fabricate and polish situational information and shape battlefield cognition through deepfakes: it can use drones to falsify the battlefield picture and build public opinion before a war begins, as has already been glimpsed in recent local wars. The cost of war will fall dramatically, giving rise to a new form of war. Will small and weak countries still stand a chance? Can the will to war be changed without bloodshed? Is "force" no longer a necessary condition for defining war?
The form of war may change, but its essence remains. Whether or not war is "bloody," it will still compel the enemy to submit to one's will and inflict substantial "collateral damage"; only the mode of confrontation may differ completely. The essence of war lies in the "human nature" deep within us, and "human nature" is shaped by culture, history, behavior, and values; it can hardly be fully replicated by any artificial intelligence technology. We therefore cannot outsource all ethical, political, and decision-making questions to artificial intelligence, still less expect it to generate "human nature" on its own. AI technology may be abused in moments of passion, so it must remain under human control. Since AI is trained by humans, it will never be entirely free of bias, and so it cannot be fully removed from human supervision. In the future, AI can become a creative tool or partner that enhances "tactical imagination," but it must be "aligned" with human values. These questions must be continually pondered and worked out in practice.
Will AGI revolutionize the theory of war?
Most disciplinary knowledge is expressed in natural language. Large language models, which distill the corpus of human writing, can connect bodies of writing that scientific research has struggled to reconcile. For example, some have fed classical masterworks, and even philosophy, history, politics, and economics, into large language models for analysis and reconstruction, finding that the models can not only synthesize the views of all scholars comprehensively but also advance "views of their own" that are not without originality. Some therefore ask: could AGI likewise reanalyze and reinterpret theories of war, stimulate human innovation, and drive a major evolution and reconstruction of war theory and its systems? Theory may indeed see some improvement and development, but the science of war is practical as well as theoretical, and practicality and reality are precisely what AGI cannot supply. Can classical war theory really be reinterpreted? And if so, what then is the meaning of theory?
In short, AGI's subversion of the concept of war will go far beyond "mechanization" and "informatization." People should embrace the arrival of AGI boldly, yet with caution: understand the concepts, so as not to remain ignorant; research deeply, so as not to fall behind; strengthen oversight, so as not to be caught off guard. How to learn to cooperate with AGI while guarding against an adversary's AGI technological surprise attack is what we must attend to first in the years ahead. (Rong Ming and Hu Xiaofeng)
Afterword
Looking to the future with an open mind
Futurist Roy Amara famously asserted that people tend to overestimate a technology's short-term effects while underestimating its long-term impact, a claim later dubbed "Amara's Law." The law emphasizes the nonlinear character of technological development: a technology's real impact often takes a longer timescale to manifest fully. It reflects the pulse and direction of technological progress and embodies humanity's acceptance of, and longing for, technology.
At present, as artificial intelligence develops from weak AI toward strong AI, and from narrow AI toward general AI, each time people believe they have completed 90% of the journey, a look back may show they have covered less than 10%. The driving role of the scientific and technological revolution in military revolution grows ever more prominent; in particular, the multi-pronged penetration of high technology, exemplified by AI, into the military field is profoundly transforming the mechanisms, elements, and methods by which wars are won.
In the foreseeable future, intelligent technologies such as AGI will not stop iterating, and the cross-evolution of these technologies and their enabling applications in the military field will grow more diverse, perhaps breaking beyond the boundaries of how humans currently conceive of war. The development of science and technology is unstoppable, and no one can stop it. Whoever can, with a keen eye and a clear head, discern technology's trends and future, see its potential and power, and pierce the "fog of war," will be better placed to seize the initiative and win.
This reminds us to bring a broader perspective and wider thinking to exploring the development of future forms of war, so as to come closer to a reality we tend to underestimate. Where is AGI headed? Where is intelligent warfare headed? This is a test of human wisdom. (Ye Chaoyang)
[Editor: Wang Jinzhi]
Original Chinese military source: http://www.news.cn/milpro/20250121/1eb771b26d264926b0c2d23d12084f0f888/c.html