The Chinese Military Will Distinguish the Roles and Functions of Artificial Intelligence in War
Abstract: This article reviews “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in the journal “International Security”. It explores the contextual challenges that artificial intelligence faces in wartime strategic decision-making, as well as the difficulty and uncontrollability of AI-based prediction and judgment in a war environment. It analyzes the decision-making processes and characteristics commonly associated with artificial intelligence in military decision-making, and points out the important role played by human factors.
In recent years, artificial intelligence has developed rapidly and has been widely applied in commerce, logistics, communications, transportation, education, media, translation, and many other fields, and the military attaches great importance to it as well. A large body of research and practice suggests that artificial intelligence can replace human work in many positions. Using artificial intelligence to conduct military operations, and to dominate all action in future wars, has therefore become the goal of AI in the military field; on this view, future wars are essentially wars of artificial intelligence. Avi Goldfarb and Jon R. Lindsay argue in “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War” that artificial intelligence cannot replace humans in future wars: far from weakening the human role, AI increases the importance of humans in war. The authors hold that AI supported by machines alone cannot solve the problems of current and future wars, mainly because of data-quality problems and the difficulty of judgment; compounded by an adversary’s concealment, deception, and interference, the role of purely machine-supported AI in future wars will be greatly diminished. The two authors develop the argument across four main aspects: the strategic context, artificial intelligence in war, the performance of artificial intelligence in military decision-making, and discussion and reflection on the strategic significance of military AI. Their conclusion is that AI cannot replace purely human work in current and future wars; on the contrary, the human role will remain important in future wars. Their analysis and main points are summarized below. To make direct evaluation easier, we append our own comments after each of the authors’ points.
The strategic context of military organizational decision-making poses a huge challenge to artificial intelligence
The authors point out that the decision-making of military organizations is affected by many factors, which can generally be summarized as follows: (1) Political context: the political context is manifested mainly in the strategic environment, material conditions, and psychological preferences. (2) Technical context: the rapid advance of machine learning enables prediction, including image recognition and navigation, that is more accurate, more complex, more convenient, and larger in scale. (3) Decision-making process: this process involves the objective facts of goals, values, and environment and the inferences drawn from them, that is, a process of judgment, data, and prediction. (4) Division of labor between humans and machines: the application of artificial intelligence is a function of data quality and judgment difficulty; the quality of the data and the clarity or difficulty of the judgment determine the relative advantages of humans and machines in decision-making.
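The fourth point, the division-of-labor framework, is the core of Goldfarb and Lindsay's argument: which decision mode is appropriate is a function of two variables. A minimal sketch in Python can make the two-by-two structure explicit (the quadrant labels below are our paraphrase of the framework as summarized in this review, not the authors' own terminology or code):

```python
# Schematic encoding (our paraphrase) of the data-quality x judgment-clarity
# framework: each combination of the two variables maps to a decision mode.

def division_of_labor(data_quality: str, judgment: str) -> str:
    """Map (data quality, judgment clarity) to a decision mode.

    data_quality: "high" or "low"
    judgment:     "clear" or "unclear"
    """
    table = {
        ("high", "clear"):   "automated decision (machines excel)",
        ("low",  "unclear"): "human decision (fog and friction dominate)",
        ("low",  "clear"):   "premature automation (the most dangerous case)",
        ("high", "unclear"): "human-machine teaming (humans judge, machines predict)",
    }
    return table[(data_quality, judgment)]

# The dangerous quadrant: poor data, but a clear goal and authority to act.
print(division_of_labor("low", "clear"))
```

The lookup makes the review's later point visible at a glance: only one of the four quadrants favors full automation, and the low-data/clear-goal quadrant is precisely where delegation is most tempting and most risky.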
It is fair to say that the authors have grasped the main macro-contextual factors that AI faces in participating in military decision-making, taking on specific military roles, completing military tasks, and realizing strategic and operational intent. The political context is often the hardest for AI to grasp: the instability of international diplomatic relations, sudden shifts in international politics, the stability or mutability of domestic politics, the unpredictability of changes in geography and the natural environment, and the psychological shifts of actors at home and abroad are all difficult for AI to capture. On the technical side, however fast AI develops, it cannot escape its heavy dependence on data; this dependence is a hard limit on technical progress, much as, in physics, no object can move faster than light. The decision-making process is the most important aspect of AI's participation in military decision-making and its influence on future wars, and it is also the most complex part of military command in wartime. At present, no country's army and no commander can say with confidence that AI can make every part of a decision as rationally as a human. Faced with enormous volumes of data, AI's greatest advantage is computation; humans, by contrast, can reach some conclusions without computing at all, by intuition, and decision-making and command often reflect a commander's higher wisdom and art. The context of human-machine division of labor makes it clearer that more data will flow into war decisions in the future. Humans can hand decision-making over to AI for certain matters, but essential decisions must still be made by humans. The stage that human-machine division of labor is actually moving toward is harmonious division of labor and human-machine collaboration, with particular emphasis on the human concern for the rationality, humanity, morality, and ethics of war.
The unreliability of artificial intelligence in prediction and judgment during war
(1) Uncontrollable data in the strategic environment inevitably affects prediction: this shows up both in the data itself and in its acquisition and use. The most prominent problems in the data are falsification, restriction, manipulation, invalidity, and unanalyzability. The main problems at the source and in analysis are: data sources are numerous and hard to anticipate; data analysis is limited by technology; the scope of data keeps expanding as networks grow, diluting effective data; network systems and software are susceptible to interference from many parties; there is harassment by hackers and other actors; and multiple technologies conflict with one another.
(2) Military management judgment cannot be separated from human participation: AI faces many challenges in participating in military management. First, military management judgment is a highly subjective matter. Second, using machine learning to carry out the computation is itself inevitably shaped by human judgment. Third, the objective function an AI optimizes presupposes clear goals, whereas command requires all parties to be drawn into consensus around common goals before leadership of the troops can be exercised. Army command typically spans different services, branches, and units, whose skills, tactics, capabilities, and cognition all differ; when AI is used to solve these collective-action problems, serious disputes are inevitable and often make the problem worse.
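The data-falsification problem in point (1) can be made concrete with a small numerical sketch. The example below is ours, not the article's: it shows how even a simple statistical predictor (leave-one-out nearest neighbour over synthetic two-class data) collapses when an adversary flips a fraction of the labels it learns from, while the underlying observations are untouched.

```python
# Illustrative sketch: adversarial label falsification degrades prediction
# even when the raw observations themselves are clean.
import numpy as np

rng = np.random.default_rng(0)

# Two classes of 2-D "observations" with well-separated means.
n = 300
x0 = rng.normal(loc=-2.0, scale=1.0, size=(n, 2))
x1 = rng.normal(loc=2.0, scale=1.0, size=(n, 2))
X = np.vstack([x0, x1])
y = np.array([0] * n + [1] * n)

def loo_1nn_accuracy(X, labels, truth):
    """Leave-one-out 1-nearest-neighbour: predict each point from the
    label of its closest *other* point, then score against the truth."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # a point may not vote for itself
    pred = labels[d.argmin(axis=1)]
    return float((pred == truth).mean())

clean_acc = loo_1nn_accuracy(X, y, y)

# An adversary flips 40% of the labels the model learns from.
y_poisoned = y.copy()
flip = rng.choice(len(y), size=int(0.4 * len(y)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_acc = loo_1nn_accuracy(X, y_poisoned, y)

print(f"clean labels:       accuracy = {clean_acc:.2f}")
print(f"40% labels flipped: accuracy = {poisoned_acc:.2f}")
```

The geometry of the problem never changes; only the labels are falsified, yet accuracy drops from near-perfect to little better than the flip rate allows. This is the mechanism behind the review's claim that a party who controls the data controls the judgment.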
In this section, the authors identify two fatal weaknesses that AI must face in military command, and that at least for now cannot be solved: the reliability of data is hard to guarantee, and human participation remains necessary. On data reliability: in wartime there is a great deal of data whose truth or falsity is hard to establish. Beyond that, data is controllable: an adversary or a third party may deliberately control certain data, arrange the data it supplies with particular content and logical relationships, or even deliberately distort it and feed in irrational, scattered data, so that analysis yields no meaningful correlations and no valid conclusions, destroying the capacity for judgment. The need for human participation in AI's judgment process will not be solved for a long time to come. Today's AI is designed by humans; although it can be trained and optimized on large volumes of data, humans do not yet permit AI to break free of preset human rules and constraints and to determine its own design, optimization, and upgrading entirely on its own. Given that the military decision-making process is full of variables, it is impossible to hand an entire military decision-making process over to AI. What AI can accomplish is to transmit data automatically, analyze data at scale, and deliver results. Even if routine management decisions can be delegated to AI, the truly critical decisions must still be made by humans.
In fact, considering the decisions of military management, and especially the more complex, more contested, and more constrained decisions and processes of a war environment, AI still has a long way to go before it can faithfully reflect a commander's personal decision-making style and intent, or fully realize integrated collective action by the army and individualized command across diverse services and branches. In human warfare, every combatant, and especially front-line fighters and senior commanders, faces many variables in execution: changes in weather (wind, rain, ice, and snow), rivers, lakes, and seas, fighting will, road conditions, transport capacity, production, and materiel supply frequently produce emergencies. The real battlefield therefore always holds more variables than any design anticipates. In the many judgments of military management and battlefield decision-making, human participation will remain dominant even under the conditions of future intelligent warfare.
Artificial intelligence has limited involvement in military decision-making tasks
The article identifies four decision-making processes in the mechanism of military decision-making tasks, each with corresponding characteristics, as follows.
Automated decision-making process: the best showcase for AI is “automated decision-making.” First, it can reduce the workload of administrative bodies. Second, AI helps improve the efficiency and scale of routine activities. Finally, AI helps optimize logistics supply chains. But even in these tasks, human judgment supplies the basis and the yardstick for automated decisions.
Manual decision-making process: AI cannot perform tasks characterized by limited, biased data and ambiguous, contested judgments; these must be completed by human decision-makers. For tasks of military strategy and command, the “fog” in the environment and the “friction” within the organization demand human ingenuity. Precisely where fog and friction are greatest and human “genius” is most needed, the role of AI is weakest.
Decision-automation process: premature automation means deploying AI before conditions are ripe. Relying on AI is especially dangerous when data quality is low but the machine has a clear goal and is authorized to act, and the risk is greatest when lethal action is authorized. In addition, the data may be biased, and machines do not understand human behavior well. The risks of premature automation are extreme in the military domain (for example, friendly fire and civilian casualties): AI weapons may inadvertently target innocent civilians or friendly forces, or provoke hostile retaliation. The result is that AI can kill without regard for consequences.
Human-machine cooperation process: human-machine cooperation refers to the joint work of humans and machines in processing large volumes of information. Many judgment tasks are genuinely difficult, and human intervention is needed to obtain high-quality data. In practice, intelligence analysts have an instinct for handling deceptive targets and ambiguous data, and AI struggles to learn this instinct-based ability; applying AI to such judgments is a difficult and highly challenging practice. In human-machine cooperation, AI mostly works under human guidance to handle complex, massive data and analyze complex problems; whether in high-quality data analysis or in the final decision, the dominant force is still the human.
The above lists the roles AI plays in four different decision-making modes within the current military decision-making mechanism. Although the authors do not say so explicitly, one can sense that each of these four processes either requires human participation or limits AI's role, and that across the overall sequence AI's role shows a weakening trend. The four processes can be reread as: AI dominates the automated decision-making process; AI decision-making is limited in the manual decision-making process; decisions are automated prematurely in the decision-automation process; and human experience is hard to replace in the human-machine cooperation process. In the first process, AI plainly shows its strengths in routine work, repetitive big-data tasks, and programmed, procedural activities; yet even here, the human yardstick and basis of judgment remain the key to what AI can achieve. The second process mainly covers cases where the data is sparse, attitudes are strong, subjectivity is prominent, and judgment easily becomes ambiguous. With insufficient data, machine learning is hard to carry out; each case may have its own specific variations, so no overall standard of judgment can form, and AI can often do little. Humans' distinctive values, worldviews, outlooks on life, moral emotions, personal spiritual cultivation, and work experience often yield very reasonable judgments in decisions of this kind, which AI cannot yet match.
Although many experiments continue in this area, humans' ability to draw comprehensively on personal knowledge, emotion, and value judgment in decision-making remains clearly superior to AI's. In the third process, decision automation has real advantages: huge data volumes, fast processing, real-time analytical results, and a friendly interface. For many problems, people are therefore strongly inclined to collect the relevant data from the outset and hand it to AI for learning and analysis. But because the data may only just have begun to appear, or may be easy to manipulate or arrange, what is actually obtained is often only the front end of the real data. Whether AI is used for deep learning or for data analysis, the analysis is then automated prematurely, and neither the trained model nor its results can fully characterize the problem of concern. In fact, when we research any problem, it is hard to guarantee that the data we obtain on some aspect represents all the data about that problem; however vast the data looks from outside, it may be extremely biased, extremely local, extremely early, or simply immature. AI built on such data, whether in training or in computation, yields premature calculation that prematurely stands in for the full information about the problem. And AI itself, being highly dependent on data, can hardly escape what the data predetermines.
Therefore, in the context of war, if AI's data is routinely interfered with, destroyed, deceived, manipulated, and designed by interested parties, the decisions and judgments AI produces will often be unreliable, even very dangerous or tragic. The outcome of leaving war entirely to AI would be terrible: either war of unlimited intensity, or inhumane slaughter, for AI can hardly make rational value judgments or humane emotional decisions. In the fourth process, the authors strongly emphasize that in human-machine cooperation, human judgment can achieve a high level of discrimination amid deceptive, subtly different, ambiguous, unclear, and heterogeneous data; this is an instinct born of professional experience. Although AI can extract some astonishing conclusions from big data, such analytical standards and strategies can never escape human design and are continually adjusted under human intervention. We should also note that AI's values, moral sense, humanity, and emotion cannot surpass humans' in any respect, however superior its knowledge, logic, and computing power. At present and for a long time to come, given AI's auxiliary, data-processing status in human-machine cooperation, even if AI were to reach human levels of sensitivity, complexity, acuity, awareness, and intuition, we would still leave complex and important final decisions to humans themselves.
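The "front end of the real data" problem behind premature automation can be illustrated with a small numerical sketch (ours, not the authors'): a system that estimates a quantity from an early, truncated sample is confidently and systematically wrong about the whole phenomenon, no matter how large that sample looks.

```python
# Illustrative sketch: an early sample truncated by an observability
# threshold gives a badly biased estimate of the true phenomenon.
import numpy as np

rng = np.random.default_rng(1)

# The true phenomenon: values normally distributed around 10.
population = rng.normal(loc=10.0, scale=3.0, size=100_000)

# Early, biased collection: only events below a "visibility threshold"
# of 9 have been observed so far -- the front end of the real data.
observed = population[population < 9.0]

true_mean = float(population.mean())
early_mean = float(observed.mean())
print(f"true mean of the phenomenon: {true_mean:.2f}")   # close to 10
print(f"mean of the early sample:    {early_mean:.2f}")  # well below 10
```

The early sample is not small, and nothing in it is false; it is simply not representative, which is exactly why a model "trained prematurely" on it stands in for the whole problem while misrepresenting it.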
In light of the above, the authors discuss and reflect on the role of AI in war and reach the following conclusions: first, the AI data and judgments used by military organizations depend on human intervention; second, adversaries in war have an incentive to complicate the data and judgments on which AI relies; third, it is too early to claim advantages from AI machines replacing human soldiers; finally, the unintended consequences of and controversies over AI-driven war are increasingly prominent. The authors therefore stress that it is premature to assume AI will replace humans in war or in any other competitive activity. Whether one looks at the environment and conditions of war itself, the process of war decision-making, AI's deep learning and computation in war, or AI's performance in executing military tasks, there is every reason to believe that even in future wars dominated by AI, the human role will become ever more important.
Here the authors advance a view sharply at odds with the current mainstream: military AI will not displace human dominance in war; on the contrary, it will highlight the prominent position and role of humans in future wars. This view deserves deep reflection from AI researchers, and military AI researchers in particular. The authors analyze from many angles why AI cannot stand apart from humans, act alone, and shoulder major tasks in future wars: the diverse contexts of war pose insurmountable challenges to AI; AI's wartime prediction and judgment cannot be made reliable; and AI's capacity to participate in military decision-making is limited and cannot fully replace human participation and decision. In particular, they stress the elusiveness of war itself, the unpredictability of its many factors, and the deliberate design and deception of all participants; the complexity, variability, deceptiveness, uncontrollability, and unverifiable authenticity of the war data obtained; and the vulnerability of AI in prediction and judgment, since the problems AI solves, and the bases, processes, procedures, and models by which it solves them, are all shaped by human factors, alongside AI's limited capacity to participate in military decision-making. These three aspects show that AI still faces many challenges in war, and they offer an important lesson: it is far too early for AI to dominate the future battlefield or to become a truly independent warrior and war commander. Only humans are the masters and rulers of war.
Given the high degree of human dominance in the design of AI, we hope the day when AI dominates war never comes. As humans, we expect that while AI races down the track of war, AI developers will always treat ethics, international law, the law of war, and humanitarianism as the bottom line. That is the basic guarantee of peaceful and harmonious development on Earth, and of the pursuit of beauty, peace, and happiness.
At present, we are watching the rapid development of AI closely. The rise of ChatGPT in particular, which can withstand all manner of tricky challenges in everyday chat, knowledge search, question answering, problem solving, programming, business management, project planning, language translation, paper writing, and literary creation, has indeed sounded the alarm for many jobs involving deep mental work. But however AI develops, and however disruptively systems like ChatGPT develop in the military domain, humans are the leaders of AI and the masters of war, and only humans can ensure the humanity, legitimacy, and effectiveness of war. One can only hope that the development of AI will eliminate war.
Responsible editors of the original article: 舒建軍, 馬氍鴻
(Annotations in the original article are omitted.)
Original Chinese military source: https://www.cssn.cn/dkzgxp/zgxp_gjshkxzzzwb/gjshkxzz202301/202308/t20230807_5677376.shtml