A Comprehensive Review of China's Military Intelligent Warfare: Intelligent Combat Command
Viewing Intelligent Combat Command through the "Brain in a Vat"
Liu Kui, Qin Fangfei
Key Points
● Modern artificial intelligence is essentially a "brain in a vat": if it is allowed to conduct combat command, it will always face the problem of a missing subject, that is, a missing "self". This gives artificial intelligence an inherent, fundamental defect; command must therefore be grounded in human subjectivity, using human-machine teaming to raise the effectiveness and level of combat command.
● In intelligent combat command, the commander is mainly responsible for planning what to do and what approach to follow, while the intelligent model is responsible for planning specifically how to do it.
"Brain in a vat" is a well-known thought experiment. Suppose a person's brain were removed and placed in a vat of nutrient solution, its nerve endings wired to a computer that simulates all of its sensory signals. Could the "brain in a vat" ever realize that "I am a brain in a vat"? The answer is no: as a closed system lacking real interactive experience with the outside world, it cannot step outside itself, observe itself from without, and form self-awareness. Modern artificial intelligence is essentially such a "brain in a vat". If it is allowed to conduct combat command, it will always face the problem of a missing subject, that is, a missing "self". This gives artificial intelligence an inherent, fundamental defect; command must be grounded in human subjectivity, using human-machine teaming to raise the effectiveness and level of combat command.
Based on "free choice", build a "humans strategize, machines plan" command model
On the battlefield, a commander can choose which target to strike, and whether to strike from the front, the flank, the rear, or the air; he can isolate without attacking, encircle without attacking, negotiate without attacking... This is human autonomy: the freedom to choose what to do and how to do it. A machine cannot do this. The combat plans it produces can only be plans already latent in the intelligent model, and any specific plan it outputs is simply the likeliest one in the sense of statistical probability. Plans generated by artificial intelligence therefore tend toward the "templated"; the model is in effect a "duplicating machine" that gives similar answers to the same questions and similar combat plans for the same combat scenarios.
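To make the statistical point concrete, here is a minimal, self-contained sketch (the plan names and probabilities are invented for illustration, not drawn from any real model): greedy selection of the likeliest plan is deterministic, so identical scenarios yield identical plans, and even sampling merely redistributes weight among plans the model already contains.

```python
import random

# Toy "intelligent model": a fixed probability distribution over candidate
# plans for one scenario (names and numbers are hypothetical).
PLAN_DISTRIBUTION = {
    "frontal assault": 0.45,
    "flank attack":    0.30,
    "encirclement":    0.15,
    "feigned retreat": 0.10,
}

def most_likely_plan(distribution):
    """Greedy selection: always return the single highest-probability plan."""
    return max(distribution, key=distribution.get)

def sampled_plan(distribution, temperature=1.0):
    """Stochastic selection: sample plans in proportion to (tempered) probability."""
    plans = list(distribution)
    weights = [p ** (1.0 / temperature) for p in distribution.values()]
    return random.choices(plans, weights=weights, k=1)[0]

# Greedy decoding is deterministic: the same scenario always yields the same
# plan, the "duplicating machine" behavior described above.
print([most_likely_plan(PLAN_DISTRIBUTION) for _ in range(3)])

# Sampling adds variety, but only over plans already latent in the model;
# it cannot produce a plan outside the distribution.
print([sampled_plan(PLAN_DISTRIBUTION) for _ in range(3)])
```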
Compared with artificial intelligence: for the same combat scenario, different commanders design completely different combat plans, and the same commander, facing similar scenarios at different times, designs different plans. "Attack where the enemy is unprepared, appear where you are not expected": the most effective plan may well look like the most dangerous and least possible one. Facing a combat scenario, a commander has infinite possibilities in an instant; artificial intelligence has only the certainty that looks best in that instant, lacking creativity and stratagem and easy for the opponent to anticipate. Therefore, in intelligent combat command, human autonomy should be the basis: the commander is responsible for strategizing and calculating, innovating tactics and fighting methods, and designing the basic strategy, while the machine is responsible for converting that basic strategy into an executable, operational combat plan, forming a "humans strategize, machines plan" command mode. More importantly, autonomy is the distinctive mark of a human being existing as a human being. This power to decide freely cannot and must not be ceded to machines, reducing people to the vassals of machines.
Based on "self-criticism", build a "humans object, machines correct" command model
Human growth and progress usually proceed by standing on the real self, aiming at the ideal self, and subjecting the historical self to criticism in the mode of negation of the negation. Artificial intelligence has no "self" and therefore lacks the capacity for self-criticism. It can only solve problems within its original cognitive framework: the combat ideas, combat principles, and tactics a model possesses are fixed when its training is completed. To update and improve its knowledge and ideas, the model must be continually retrained from the outside. Mapped onto a specific combat scenario, this means the intelligent model can offer the commander only pre-given solutions; it cannot keep dynamically adjusting and updating them in the course of a single battle.
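A minimal sketch of this "fixed at training completion" property, under the assumption that the model is a frozen object whose knowledge can be changed only by an external retraining step (all names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainedModel:
    """Hypothetical stand-in for an intelligent model: its knowledge
    (doctrine, principles, tactics) is fixed when training finishes."""
    doctrine: tuple = ("combat ideas", "combat principles", "tactics")

    def answer(self, scenario: str) -> str:
        # Inference only reads the frozen knowledge; it never updates it.
        return f"plan for '{scenario}' drawn from {self.doctrine}"

def retrain(old: TrainedModel, new_doctrine: tuple) -> TrainedModel:
    """Knowledge updates happen outside the model and produce a new model."""
    return TrainedModel(doctrine=new_doctrine)

model = TrainedModel()
print(model.answer("night river crossing"))
# model.doctrine = ("new ideas",)  # would raise FrozenInstanceError:
#                                  # the model cannot revise itself mid-battle
model_v2 = retrain(model, ("combat ideas", "revised tactics"))
```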
A person with a self-critical spirit can step outside the command decision-making process and review, evaluate, and criticize the decision itself. Through continuous self-criticism the combat plan is constantly adjusted, and the original plan may even be overturned and replaced by a new one. Within the command staff, other officers may also voice differing opinions on the plan; the commander adjusts and improves the original plan after fully absorbing those opinions, so that the plan evolves dynamically. Combat command is thus in essence a dynamic process of continuous forward exploration, not a static process fixed in advance by the combat plan. When the machine generates a combat plan, the commander must not accept it blindly and unthinkingly, but should act as an "opponent" and "fault-finder", reflecting on and criticizing the plan and raising objections. Based on those objections, the machine assists the commander in continuously adjusting and optimizing the plan, forming a "humans object, machines correct" command mode.
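The "humans object, machines correct" cycle can be sketched as a simple loop; generate_plan and revise_plan below are hypothetical placeholders for model calls, and the stopping rule (iterate until the commander raises no objection) is one plausible reading of the process described above.

```python
def generate_plan(scenario: str) -> str:
    """Hypothetical model call: produce an initial draft plan."""
    return f"draft plan for {scenario}"

def revise_plan(plan: str, objection: str) -> str:
    """Hypothetical model call: fold the commander's objection into the plan."""
    return f"{plan} [revised to address: {objection}]"

def command_loop(scenario: str, commander_review) -> str:
    """Humans object, machines correct: the machine proposes, the human
    criticizes, and iteration continues until no objection remains."""
    plan = generate_plan(scenario)
    while (objection := commander_review(plan)) is not None:
        plan = revise_plan(plan, objection)
    return plan

# Example: a commander who objects once, then accepts.
objections = iter(["axis of advance is too predictable"])
print(command_loop("island landing", lambda plan: next(objections, None)))
```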
Based on "conscious initiative", build a "humans lead, machines follow" command model
Comrade Mao Zedong once said that what we call "conscious initiative" is the characteristic that distinguishes humans from things. Every complex practical activity that transforms the world starts from a rough, abstract idea. To turn abstract concepts into concrete action, one must overcome all manner of risks and challenges, give full play to conscious initiative, and actively set goals, offer ideas, and devise methods. Artificial intelligence has no conscious initiative: when people pose questions to it, it gives only the answers latent in the model, with no concern for whether the answer is usable, targeted, or practicable. In other words, pose an abstract, empty question and it returns an abstract, empty answer. This is also why the uniform operating mode of today's popular large models is "humans ask, machines answer", never "machines ask questions".
Relying on conscious initiative, even the most abstract and empty problem can be transformed by a person, step by step, into a concrete action plan and concrete action in practice. Therefore, in intelligent combat command the commander is mainly responsible for planning what to do and what approach to follow, while the intelligent model is responsible for planning specifically how to do it. If the combat mission is too abstract and general, the commander should first decompose and refine the problem, and the intelligent model should then solve the refined sub-problems. Under the commander's guidance the problem is solved step by step, by stage and by domain, until the combat objective is finally achieved, forming a "humans lead, machines follow" command mode. It is like writing a paper: first draw up an outline, then write. The person is responsible for the outline, the machine does the actual writing, and if a first-level outline is not specific enough, the person can refine it into second- or even third-level headings.
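The paper-outline analogy suggests a simple recursive division of labor, sketched below with hypothetical names: the human refines headings that are still too abstract, and the machine expands only leaf headings that are concrete enough to execute.

```python
def model_expand(heading: str) -> str:
    """Hypothetical model call: write the concrete content for one
    sufficiently specific heading."""
    return f"<detailed plan for '{heading}'>"

def humans_lead_machines_follow(outline: dict) -> list:
    """The human supplies the (possibly nested) outline; the machine only
    fills in leaf headings, solving the problem stage by stage."""
    results = []
    for heading, subheadings in outline.items():
        if subheadings:  # still too abstract: the human has refined it further
            results += humans_lead_machines_follow(subheadings)
        else:            # concrete enough: hand it to the machine
            results.append(model_expand(heading))
    return results

# A first-level outline, with one item refined into second-level headings.
outline = {
    "seize air superiority": {"suppress radars": {}, "strike airfields": {}},
    "secure the beachhead": {},
}
print(humans_lead_machines_follow(outline))
```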
Based on "self-responsibility", build a "humans decide, machines calculate" command model
Modern advanced shipborne air-defense and anti-missile systems usually offer four engagement modes: manual, semi-automatic, standard automatic, and special automatic. Once special automatic mode is activated, the system no longer requires human authorization to launch missiles. Yet this mode is rarely activated, whether in combat or in training. The reason is that a human, as the subject of responsibility, must answer for all of his actions, whereas behind a machine's behavior there is no responsible subject at all: when accountability for a major error must be assigned, a machine cannot bear it. Matters of life and death must therefore never be decided by a machine that carries no autonomous responsibility. Moreover, modern artificial intelligence is a "black box": the intelligent behavior it exhibits is unexplainable, and the reasons it is right or wrong cannot be known, so people cannot lightly hand major decision-making power over to a machine.
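The four modes reduce to a single question: does a launch still pass through a human-authorization gate? The sketch below encodes that reading; the mode names follow the text, but the gate assignments and function names are assumptions for illustration, not any real system's interface.

```python
from enum import Enum

class EngagementMode(Enum):
    MANUAL = "manual"
    SEMI_AUTOMATIC = "semi-automatic"
    STANDARD_AUTOMATIC = "standard automatic"
    SPECIAL_AUTOMATIC = "special automatic"

# Assumed mapping: only special automatic mode removes the human gate,
# which is why the text notes it is rarely activated.
REQUIRES_HUMAN_AUTHORIZATION = {
    EngagementMode.MANUAL: True,
    EngagementMode.SEMI_AUTOMATIC: True,
    EngagementMode.STANDARD_AUTOMATIC: True,
    EngagementMode.SPECIAL_AUTOMATIC: False,
}

def may_launch(mode: EngagementMode, human_authorized: bool) -> bool:
    """A launch proceeds only if the mode's authorization gate is satisfied."""
    return human_authorized or not REQUIRES_HUMAN_AUTHORIZATION[mode]

assert not may_launch(EngagementMode.STANDARD_AUTOMATIC, human_authorized=False)
assert may_launch(EngagementMode.SPECIAL_AUTOMATIC, human_authorized=False)
```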
Because artificial intelligence lacks "autonomous responsibility", every problem in its eyes is a "domesticated problem": a problem whose consequences have nothing to do with the answerer, where the success or failure of the solution is a matter of indifference to it. The counterpart is the "wild problem", whose consequences are bound up with the answerer, who must be personally involved. For a self-less artificial intelligence there are no "wild problems", only "domesticated" ones; it stands outside every problem it touches. Therefore, in intelligent combat command, the machine cannot replace the commander in making judgments and decisions. It can supply the commander with key knowledge, identify battlefield targets, collate battlefield intelligence, analyze battlefield conditions, and predict the battlefield situation; it can even generate combat schemes, formulate combat plans, and draft combat orders. But the schemes, plans, and orders it produces can serve only as drafts and references; whether to adopt them, and to what extent, remains the commander's call. In short, the two sides decide together, with artificial intelligence responsible for prediction and the human responsible for judgment, forming a "humans decide, machines calculate" command mode.
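Finally, the "humans decide, machines calculate" division of labor can be sketched as a gate between machine output and the actual order (again with hypothetical names): the machine predicts and drafts, but nothing becomes an order except through the commander's decision.

```python
from dataclasses import dataclass

@dataclass
class MachineEstimate:
    """What the machine may contribute: predictions and draft products."""
    situation_forecast: str
    draft_plan: str

def machine_calculate(scenario: str) -> MachineEstimate:
    """Hypothetical model call: predict and draft, but never decide."""
    return MachineEstimate(
        situation_forecast=f"forecast for {scenario}",
        draft_plan=f"draft plan for {scenario}",
    )

def human_decide(estimate: MachineEstimate, adopt: bool) -> str:
    """Only the commander adopts, amends, or rejects the draft; machine
    output enters the final order solely through this human decision."""
    if adopt:
        return f"ORDER (commander-approved): {estimate.draft_plan}"
    return "ORDER: commander directs a new plan; machine draft kept as reference"

estimate = machine_calculate("strait blockade")
print(estimate.situation_forecast)          # prediction: the machine's part
print(human_decide(estimate, adopt=False))  # judgment: the human's alone
```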