What strategic risks will military artificial intelligence bring to the game between China and the United States?


軍事人工智慧將為中美博弈帶來哪些戰略風險?

現代英語:

2023-10-24 10:21:32 | Source: Military High-Tech Online
In July 2023, the Center for a New American Security (CNAS) released a report titled U.S.-China Competition and Military AI: How Washington Can Manage Strategic Risks amid Rivalry with Beijing. Against the backdrop of intensifying U.S.-China competition and the rapid development of artificial intelligence, the report examines how the United States can effectively manage the strategic risks that the militarization of AI introduces into U.S.-China relations. It analyzes in depth the paths by which military AI could heighten strategic risk between the two countries, the options available to the United States for managing those risks, and related measures and recommendations. Because the report is of considerable reference value, its content is compiled below for readers to study and discuss.

I. Five ways military AI exacerbates strategic risks between China and the United States


How will emerging military artificial intelligence exacerbate strategic risks between China and the United States? The report discusses five possible pathways of influence in an attempt to analyze and forecast this question.

1. Reshaping the Sino-US Military Balance
The report points out that, as artificial intelligence is applied to military purposes, the most likely driver of heightened U.S.-China strategic risk is an imbalance created when one side's military strength improves unilaterally. In the short term, military AI will still be used mainly to improve equipment maintenance, military logistics, personnel training, and decision support, playing an auxiliary, enabling role; yet these "behind-the-scenes" tasks, like front-line troops and weapons, form the foundation of military power. In addition, some emerging military AI systems will directly improve combat capability. For example, the "loyal wingman" concept based on human-machine teaming can help improve a pilot's mission effectiveness, although the improvement is likely to be incremental rather than revolutionary, and compared with fully autonomous unmanned aircraft, the "loyal wingman" does relatively little to transform the air-combat paradigm. There is no doubt, however, that whichever side first fields military applications of AI will see its military strength grow rapidly; as one side gains while the other falls behind, the U.S.-China military balance could enter a new stage, provoking alarm and concern in the lagging party.

2. Profound impact on information acquisition and strategic decision-making
The report believes that military AI may increase strategic risk in the decision-making and information domains in three main ways. First, by compressing decision time: if AI helps one side decide faster, the other side may rush its own decisions to keep up with the opponent's actions, and this time pressure can heighten tensions or even create a new crisis. Second, by inducing decision makers to make poor decisions: an AI system's decision process sits inside a technical "black box", and without a clear understanding of the system's workings and flaws, major strategic decisions could end up resting on analysis of maliciously fabricated, distorted, or otherwise low-quality information. Third, by shaping the opponent's perceptions through large-scale information operations: AI can generate massive volumes of targeted text, audio, images, or video to undermine political stability, confuse high-level decision-making, create rifts within alliances, and trigger or aggravate political crises.

3. Autonomous weapon systems
First, if autonomous weapon systems provide greater military capability, decision makers may become more inclined to use force because they believe their chances of winning are higher. Second, military operations that rely on autonomous weapon systems carry a lower expected risk of casualties, which may make leaders on both sides more willing to act. Third, autonomous technology will greatly enhance the combat capability of existing weapon systems, for example by giving hypersonic weapons the autonomy to maneuver and change trajectory, making them harder to intercept, or by using machine learning to improve the predictive capability of air-defense systems, making it feasible to deploy defenses against hypersonic and other high-end missiles and giving the user greater military strength. Finally, autonomous drone swarms could in theory offer new options for conventional counterattacks against an opponent's nuclear arsenal, a potential capability that may upset the strategic balance and increase the risk of strategic miscalculation.

4. Intelligence, Surveillance and Reconnaissance (ISR)
Military AI already provides new tools for intelligence, surveillance, and reconnaissance missions and may play an even greater role in the future. Combined with existing technologies, it can greatly improve the efficiency and cost-effectiveness of ISR. For example, AI can be paired with balloons or microsatellite constellations to conduct surveillance in near-Earth space, or used to enable swarming by reconnaissance drones. AI systems can also process data from many kinds of sensors at scale to track mobile missile systems on land and even submarines at sea. If these capabilities become a reality, they would give the side leading in military strength a one-way transparency that undermines strategic stability, fundamentally eroding the survivability of the opponent's nuclear triad and greatly increasing both the likelihood and the perceived necessity of a "preemptive" strike by the weaker party.


5. Command, Control, and Communications (C3)
AI can make cyber and electromagnetic warfare (EW) attacks more threatening and destructive. As large volumes of training data become increasingly important to AI, either side may try to "poison" the other's systems by modifying or tampering with data sets to deliberately degrade performance, which could produce uncertainty or predictable failures in AI-enabled command, control, and communications systems that an opponent can exploit. Another specific concern is the effect of military AI on nuclear C3. Nuclear early-warning systems will increasingly rely on AI to rapidly analyze data from many sensors, but such a system may misinterpret the data and generate false alarms, and the result could be a brutal nuclear war that devastates both sides.
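To make the data-poisoning pathway concrete, the minimal sketch below is a hypothetical illustration, not drawn from the report; the synthetic dataset, the logistic-regression model, and the 40% flip rate are assumptions. It shows how relabeling a targeted slice of training data quietly degrades a simple classifier, the kind of silent performance loss an adversary could try to induce in an AI-enabled C3 component.

```python
# Hypothetical illustration of label-flipping data poisoning (not from the report).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Two Gaussian clusters: class 0 centered at (-1,-1), class 1 at (+1,+1)."""
    x0 = rng.normal(-1.0, 1.0, size=(n, 2))
    x1 = rng.normal(+1.0, 1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.concatenate([np.zeros(n), np.ones(n)])

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression trained with batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0).astype(float) == y))

X_train, y_train = make_data(500)
X_test, y_test = make_data(500)

# Targeted poisoning: relabel the 40% of class-1 training points that lie
# closest to the class boundary, pushing the learned boundary into class-1
# territory while leaving most of the training set untouched.
proj = X_train @ np.ones(2)                     # position along the separating axis
class1 = np.where(y_train == 1.0)[0]
flip = class1[np.argsort(proj[class1])[: int(0.4 * len(class1))]]
y_poisoned = y_train.copy()
y_poisoned[flip] = 0.0

w_c, b_c = train_logreg(X_train, y_train)
w_p, b_p = train_logreg(X_train, y_poisoned)
print(f"model trained on clean data    : test accuracy {accuracy(w_c, b_c, X_test, y_test):.3f}")
print(f"model trained on poisoned data : test accuracy {accuracy(w_p, b_p, X_test, y_test):.3f}")
```

Targeted flips near the class boundary are used here because uniformly random flips are largely absorbed by simple models, which is also part of why this kind of tampering can be hard to detect.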

II. Three options for the United States to manage strategic risks of military artificial intelligence

The report points out that the United States needs to take a series of measures to guard against the various potential dangers that military artificial intelligence brings to the bilateral security relations between China and the United States. These sources of risk may overlap in reality, and risk portfolio management aims to reduce a variety of different drivers of instability. The report discusses three options for managing and controlling the strategic risks of military artificial intelligence.

1. Restricting the development of China’s military AI technology
The report emphasizes that one way AI may raise escalation risk is by providing one side with a military advantage large enough to convince it that it can wage war and achieve its goals at an acceptable cost. The United States therefore needs to try to impede the development of China's AI technology and prevent the military balance from tilting in China's favor, while vigorously developing its own AI capabilities so that it always retains the lead and deters through technological advantage. At present, U.S. efforts to impede China's military AI development focus mainly on advanced semiconductors, the key hardware underpinning AI systems, alongside targeted restrictions on data, algorithms, and talent. For example, the U.S. government's crackdown on TikTok (the overseas version of Douyin) stems in part from concern that Americans' data could be used to advance China's AI technology. The United States will also strictly regulate the source code of AI algorithms used for geospatial analysis and further restrict the export or disclosure of general-purpose algorithms such as facial-recognition software and large language models. On talent policy, the U.S. government will take further steps to prevent Chinese students from studying AI technology in the United States.

2. Strengthen unilateral management and control military AI responsibly
The report points out that minimizing civilian casualties should be a key design principle for military AI, and that the best way to reduce its risks is to place system safety and reliability on an equal footing with lethality or efficiency and to rigorously carry out testing and evaluation, verification and validation. To minimize uncertainty, both China and the United States need to adopt safe-by-design principles. The United States has issued a series of unilateral declaratory policies on the development and use of military AI. The Department of Defense document AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense requires the U.S. military to use AI in ways that are "responsible, equitable, traceable, reliable, and governable." These core principles have been reiterated and elaborated in later documents, such as the responsible-AI practice guidelines, the Responsible Artificial Intelligence Strategy and Implementation Pathway, and the updated Autonomy in Weapon Systems directive (DoD Directive 3000.09) issued in January 2023, which specify how AI is to be used and integrated across the entire life cycle of defense programs.
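As one hedged illustration of what putting safety and reliability "on an equal footing" with performance might look like in engineering terms, the sketch below implements a pre-deployment release gate. The metric names, thresholds, and the release_gate function are assumptions for illustration only, not taken from any DoD document.

```python
# Hypothetical pre-deployment test-and-evaluation gate (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float          # performance on a held-out operational test set
    robust_accuracy: float   # accuracy under perturbed or adversarial inputs
    false_alarm_rate: float  # fraction of benign cases flagged as threats

def release_gate(report: EvalReport,
                 min_accuracy: float = 0.95,
                 min_robust_accuracy: float = 0.90,
                 max_false_alarm_rate: float = 0.01) -> bool:
    """Approve release only if every safety/reliability criterion passes.

    Reliability checks are hard requirements alongside raw performance,
    not optional extras.
    """
    checks = {
        "accuracy": report.accuracy >= min_accuracy,
        "robust_accuracy": report.robust_accuracy >= min_robust_accuracy,
        "false_alarm_rate": report.false_alarm_rate <= max_false_alarm_rate,
    }
    for name, passed in checks.items():
        print(f"{name:18s}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())

# A candidate with strong nominal accuracy but too many false alarms is rejected.
candidate = EvalReport(accuracy=0.97, robust_accuracy=0.91, false_alarm_rate=0.03)
print("release approved:", release_gate(candidate))
```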


3. Conduct bilateral and multilateral diplomacy to reduce strategic risks
Another way to prevent dangerous power imbalances, costly arms races, or miscalculation is bilateral and multilateral diplomacy. By negotiating arms-control agreements or confidence-building measures, countries can try to set boundaries on the development or use of specific military technologies and then verify compliance. China and the United States should discuss limits on risky applications of AI, such as regulating its use in nuclear command and control or in offensive cyber operations. The two governments can use bilateral and multilateral channels to exchange views on AI's implications for national security. The two militaries can likewise hold dialogues in which each side raises questions about AI-enabled military capabilities and their uses and communicates on rules of engagement, operational conflicts, and other topics, so that each can fully express its concerns and expectations. Beyond official channels, the two countries can also use Track 1.5 and Track 2 dialogues to build understanding and consensus.

III. Nine recommendations for U.S. policymakers in the report
The emergence of military artificial intelligence may intensify U.S.-China competition and increase strategic risk. To respond effectively to this trend, the report argues that U.S. policymakers should focus their efforts on nine areas.

1. Restricting the development of artificial intelligence in relevant countries
The report recommends that U.S. policymakers continue to restrict exports to China of semiconductor manufacturing equipment and technology, advanced chips, and other end products, hindering the relevant countries' progress in military AI. It also recommends that the United States find or develop creative tools to regulate AI and its data, algorithms, and workforce; clearly delineate military and dual-use AI technologies; and continually refine these policies to keep them effective, while staying alert to the constraints such policies may place on technological development.

2. Maintaining America’s Lead in Military AI
The report points out that the United States must act quickly to keep pace with the development of China's military AI. This requires reform in many areas, such as making "resilience" a key attribute of military systems. Success will require effort not only from the Department of Defense; the United States must also update its immigration and education policies to attract, train, and retain the best scientists and engineers from around the world.

3. Develop, promulgate, and implement responsible military AI norms or regulations
The United States should position itself as the leading global driver of military AI technology development, operational norms, and best practices. Near-term U.S. priorities should include further fleshing out the operational details of norms governing cyber attacks, including AI-enabled attacks, on nuclear C3 infrastructure, and fulfilling the commitments of the 2022 Nuclear Posture Review (NPR). In short, U.S. actions must match its rhetoric on the responsible use of military AI.


4. Proactively engage with allies, partners, and multilateral institutions
Regional and global partnerships play a vital role in achieving U.S. strategic goals. The United States should actively integrate consultations on relevant issues into its alliances and partnerships, expand the scope of discussion in the G7, NATO, AUKUS, and bilateral relations with Japan and South Korea, and actively promote and advocate the U.S. position in multilateral forums.

5. Consult with China on reducing risks and building trust related to military AI
The report suggests that the United States could try to expand negotiation channels with China on military AI, for example by developing a shared U.S.-China lexicon of military AI terms to ensure that both sides define key concepts the same way, reducing misunderstandings caused by language and cultural barriers. The two sides could also define risk tiers based on AI capability, for instance classifying AI for logistics support as low risk and AI for autonomous nuclear weapons as high risk, and could go further to discuss AI application domains and set rules for the use of AI in lethal weapons. Even if such negotiations fall short of expectations, exploring these issues would help the two sides understand each other better.
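To make the proposed capability-based risk tiers concrete, here is a minimal sketch. The tier names, the RISK_TIERS mapping, and the medium-tier examples are illustrative assumptions; only the logistics-as-low and autonomous-nuclear-as-high examples come from the report's discussion.

```python
# Illustrative sketch of capability-based risk tiers; categories are assumptions.
from enum import Enum

class RiskTier(Enum):
    LOW = 1       # e.g., logistics and sustainment support
    MEDIUM = 2    # e.g., ISR data processing, decision support (assumed tier)
    HIGH = 3      # e.g., autonomy touching nuclear weapons or their C3

RISK_TIERS = {
    "logistics_support": RiskTier.LOW,
    "predictive_maintenance": RiskTier.LOW,
    "isr_data_fusion": RiskTier.MEDIUM,
    "decision_support": RiskTier.MEDIUM,
    "lethal_autonomous_weapons": RiskTier.HIGH,
    "nuclear_c3_autonomy": RiskTier.HIGH,
}

def tier_of(application: str) -> RiskTier:
    """Look up an application's tier; unknown uses default to HIGH out of caution."""
    return RISK_TIERS.get(application, RiskTier.HIGH)

for app in ("logistics_support", "decision_support", "nuclear_c3_autonomy", "swarm_targeting"):
    print(f"{app:28s} -> {tier_of(app).name}")
```

Defaulting unknown applications to the highest tier reflects the precautionary logic of the paragraph above: ambiguity is treated as risk until the two sides agree otherwise.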

6. Continue to seek to establish a strategic risk and crisis management mechanism between China and the United States
Establishing effective diplomatic channels between China and the United States, especially maintaining contacts at the summit level, is crucial to reducing strategic risks and managing potential crises. The report recommends that the United States continue to explore the establishment of a strategic risk and crisis management mechanism between China and the United States. Even if it works intermittently, it is better than having no mechanism at all.

7. Make military AI a fundamental pillar of diplomacy with China related to nuclear weapons and strategic stability
Military artificial intelligence plays an increasingly important role in the balance between nuclear capabilities and other strategic capabilities. The report recommends that the United States initiate discussions on “strategic stability” at the level of the five permanent members of the United Nations Security Council and include military artificial intelligence in the negotiations.

8. Reducing strategic risks in other areas
The report believes the United States should move quickly to reduce strategic risks in other related areas and exercise caution in unilateral actions, for example by postponing intercontinental ballistic missile tests when tensions escalate, especially when an immediate test is not needed to ensure a safe, reliable, and effective nuclear deterrent.

9. Strengthening Intelligence Collection, Analysis and Assessment
The trajectory of military AI depends not only on the technology itself but also on its interaction with nuclear weapons, military infrastructure, communications capabilities, and other factors, so there is an urgent need to deepen understanding of overall strategic stability as it relates to military AI. The report recommends that the United States direct the relevant organizations to improve, or where necessary establish, multidisciplinary offices and cadres of experts to closely follow China's civilian and military AI activities, monitor and analyze related intelligence, and provide recommendations.

IV. Conclusion
The military application of artificial intelligence may increase strategic risks, and countries need to work together to explore and regulate the development of artificial intelligence technology. In the face of the opportunities and challenges that artificial intelligence technology brings to human society, countries should use dialogue to dispel suspicion, replace confrontation with cooperation, and work together to promote good laws and good governance in the field of artificial intelligence, so that artificial intelligence technology can truly benefit mankind.

Text | Wen Lihao, Chen Lin (National University of Defense Technology)

現代國語:

2023年7月,新美國安全中心(CNAS)推出報告《中美關係與軍事人工智慧:美國如何在與中國的競爭中管控風險》(U.S.-China Competition and Military AI: How Washington Can Manage Strategic Risks amid Rivalry with Beijing),探討在中美博弈加劇和人工智慧技術迅速發展背景下,美國如何在中美關係中有效管控由人工智慧軍事化引發的一系列戰略風險,就軍事人工智慧加劇中美戰略風險的可能路徑、美國管控軍事人工智慧戰略風險的可選方案和相關措施建議展開了深入分析。報告具有較大參考價值,故將原文內容編譯如下,供讀者學習交流。

圖1:原報告封面
一、軍事人工智慧加劇中美間戰略風險的五條路徑
新興軍事人工智慧究竟會以何種方式加劇中美間的戰略風險?報告討論了五種可能的影響路徑,試圖對此問題進行分析和預測。
(一)重塑中美軍事平衡
報告指出,在人工智慧軍事化應用過程中,由於軍事實力單方面提高而造成的競爭雙方軍事實力失衡最有可能加劇中美戰略風險。軍事人工智慧短期內仍將主要用於改善軍隊的裝備維護、軍事後勤、人員培訓和決策支援等過程,發揮輔助性、增益性作用,但這些「幕後」任務與前線部隊和武器一樣,構成了軍事實力的基礎。此外,一些新興軍事人工智慧系統也將提高部隊的作戰能力,例如基於人機協同的「忠誠僚機」系統能夠幫助提高飛行員的任務效能,儘管這種改進可能是漸進式而非革命性的,且相比完全自主的無人駕駛飛行器,「忠誠僚機」對空戰範式的變革作用有限。但毫無疑問的是,率先進行人工智慧軍事應用的一方,其軍事實力將快速發展,此消彼長間可能推動中美軍事平衡進入新階段,引發落後方的恐慌和擔憂。
(二)深刻影響資訊取得與策略決策
報告認為,軍事人工智慧或將主要以三種方式增加決策和資訊領域產生的戰略風險:一是壓縮決策時間,如果人工智慧可以幫助一方更快決策,那麼另一方可能會為了跟上對手的行動而倉促決策,這種時間壓力可能會加劇緊張局勢甚至製造一場新的危機;二是誘導決策者做出錯誤決策,人工智慧系統的決策過程處於技術「黑箱」中,如果對人工智慧系統的運作機制和缺陷缺乏清晰認知,重大戰略決策最終可能會建立在對被惡意捏造、扭曲的信息或其他劣質信息的分析的基礎上;三是通過大規模信息活動影響對手認知,借助人工智能生成海量含有指向性的文本、音頻、圖像或視頻,破壞政治穩定、混淆高層決策、製造同盟裂痕,引發或加劇政治危機。

圖2:基於人工智慧的「深度偽造」技術已經能夠快速產生海量的偽造訊息
(三)自主武器系統
首先,如果自主武器系統提供了更強的軍事能力,決策者將可能更傾向於使用武力,因為他們相信獲勝的機會會更高。其次,使用自主武器系統的軍事行動在人員傷亡方面的預期風險較低,這可能會讓雙方領導人更有可能採取行動。再次,自主武器技術將極大增強現有武器系統的作戰能力,例如使高超音速武器具備機動變軌的自主性,令敵更難攔截;或藉助機器學習提高防空系統的預測能力,使反高超音速和其他高端飛彈防禦系統的部署成為可能,為使用方賦能更強的軍事實力。最後,具備自主性的無人機群理論上可以為針對對手核武庫的常規反擊提供新的選擇,這種潛在能力將可能打破戰略平衡,加劇戰略誤判的風險。
(四)情報、監視與偵察(ISR)
軍事人工智慧已經為完成情報、監視和偵察任務提供了新的工具,並且在未來可能會發揮更大作用。軍事人工智慧與現有技術的結合,可以大幅提高完成ISR任務的效率和性價比。例如將人工智慧與氣球或微衛星星座結合,以在「近地空間」進行監視,或為偵察無人機賦能群集性。人工智慧系統還可以大規模處理來自各種感測器的數據,以追蹤陸地上的移動飛彈系統甚至大洋中的潛艇。如果這些能力成為現實,它們將為軍事實力領導者提供能夠破壞戰略穩定性的單向透明度,進而徹底損害對手三位一體核力量的生存能力,也能極大增加弱勢方採取「先發制人」打擊的可能性和必要性。

圖3:自主武器系統應該掌握「開火權」嗎?
(五)指揮、控制與通信(C3)
人工智慧可以使網路和電磁戰(EW)攻擊更具威脅性和破壞性。隨著大數據輸入在人工智慧訓練中變得越來越重要,雙方都可能會透過修改或微調資料集來故意降低系統性能進而達到「毒害」對手的目的,這可能導致人工智慧指揮、控制和通訊系統的不確定性或可預測故障,被對手利用。另一個具體擔憂是,軍事人工智慧可能會影響核武的C3系統。核子預警系統將越來越依賴人工智慧技術來快速分析來自各種感測器的數據,但該系統可能會錯誤解讀數據,產生誤報,其結果可能引發兩敗俱傷的殘酷核戰。
二、美國管控軍事人工智慧戰略風險的三種方案
報告指出,美國需要採取一系列措施來防範軍事人工智慧對中美雙邊安全關係帶來的各種潛在危險,這些風險來源在現實中可能重疊,風險組合管理旨在減少多種不同的不穩定驅動因素,報告在此討論了管控軍事人工智慧戰略風險的三種方案。
(一) 限制中國軍事人工智慧技術發展
報告強調,人工智慧可能加劇風險升級的一種途徑是它為一方提供足夠大的軍事優勢,使該國相信它可以以可接受的成本發動戰爭並實現其目標。因此,美國需要設法阻止中國人工智慧技術發展,避免軍事力量平衡向有利於中國的方向傾斜。同時,大力發展美國的人工智慧能力,使其始終處於領先地位,形成技術優勢威懾。目前,美國阻止中國軍事人工智慧發展的重點主要集中在支援人工智慧系統的重要硬體——先進半導體上,同時有針對性地從數據、演算法和人才方面加以限制。例如美國政府對TikTok(海外版抖音)的打壓,部分原因是擔心美國人的數據可能被用來推動中國人工智慧技術進步。美國也將對用於地理空間分析的人工智慧演算法原始碼進行嚴格監管,並進一步限制臉部辨識軟體、大型語言模型等通用演算法的輸出或揭露。在人才政策方面,美國政府會採取進一步措施,阻止中國學生在美國學習人工智慧技術。

圖4:美國藉口「國家安全」打壓TikTok
(二) 加強單邊責任管理,負責任管控軍事人工智慧
報告指出,最小化平民傷亡應作為軍事人工智慧的關鍵設計原則,降低軍事人工智慧風險的最佳方法是將系統的安全性和可靠性與其殺傷力或效率放在同等重要的位置,並嚴格執行測試和評估、驗證和確認。為了最大限度地減少不確定性,中國和美國需要採用安全的設計原則。美國就軍事人工智慧的開發和使用制定了一系列單方面的宣言性政策。美國國防部《人工智慧原則:國防部人工智慧應用倫理的若干建議》要求美軍在使用人工智慧時做到「負責、公平、可追溯、可靠和可控」。這些核心原則在後續發布的文件中得到了重申和補充,如《負責任的人工智慧實踐指南》、《負責任的人工智慧戰略和實施途徑》以及2023年1月發布的《自主武器系統指令》(DoD Directive 3000.09),這些文件規定了如何使用人工智慧並將其融入國防專案的整個生命週期。
(三)進行雙邊與多邊外交,降低戰略風險
防止危險的力量失衡、代價高昂的軍備競賽或誤判的另一種方式是進行雙邊和多邊外交。透過談判達成軍備控制協議或建立信任措施,各國可以嘗試為特定軍事技術的開發或使用設定界限,然後核查遵守情況。中國和美國應該討論對人工智慧風險應用的限制,例如規範其在核指揮與控制或進攻性網路行動中的使用。美國和中國政府可以利用雙邊和多邊管道,就人工智慧對國家安全的影響交換意見。中美兩軍也可以展開對話,雙方就人工智慧的軍事能力及其用途提出問題,並就交戰規則、行動衝突和其他主題進行溝通,充分表達各自訴求和期望。除官方管道外,兩國還可利用1.5軌與2軌對話,增進理解與共識。
三、報告為美國決策層提供的九項措施建議
軍事人工智慧的出現可能會加劇中美競爭,增加戰略風險。為了有效因應這一趨勢,報告認為美國的政策制定者應該從9個面向進行努力。
(一)限制相關國家人工智慧的發展
報告建議美國政策制定者繼續限制半導體生產設備和技術、先進晶片等終端產品的對華出口,阻礙相關國家推動軍事人工智慧。此外,也建議美國尋找或開發監管人工智慧和其數據、演算法、人力的創意工具。明確發展人工智慧軍用和軍民兩用技術,並不斷改善其政策,確保有效性,同時警惕政策為技術發展帶來限制。
(二) 維持美國軍事人工智慧的領先地位
報告指出,美國必須迅速採取行動,跟上中國軍事人工智慧的發展速度。這需要在許多領域進行改革,例如,將「韌性」作為軍事系統的關鍵屬性。要想在這方面取得成功,不僅國防部要做出努力,還需要更新移民和教育政策,吸引、訓練和留住世界各地最優秀的科學家和工程師。
(三) 制定、頒布、實施負責任的軍事人工智慧規範或法規
美國應將自己定位為軍事人工智慧技術開發、操作規範制定和最佳實踐的全球主要推動者。美國近期的主要優先事項應包括進一步充實在核C3基礎設施上實施網路攻擊(包括人工智慧)規範的操作細節,並履行2022年《核態勢評估報告》(Nuclear Posture Review,NPR)的承諾。簡而言之,美國的行動必須與其在負責任地使用軍事人工智慧的言論相符。

圖5:美國自2018年起對華為展開全方位打壓
(四) 主動與盟友、夥伴以及多邊機構接觸
區域和全球夥伴關係在促成美國戰略目標完成方面發揮著至關重要的作用。美國應積極將相關議題的磋商納入其同盟和夥伴關係,擴大G7、北約、AUKUS及與日本和韓國雙邊關係的討論範圍,積極推進、倡導美國在多邊論壇中的立場。
(五)與中國就降低軍事人工智慧相關風險和建立信任進行磋商
報告建議,美國可以嘗試拓展與中國建立軍事人工智慧的談判管道,如開發中美軍事人工智慧術語詞彙表,保證雙方對關鍵概念有共同的定義,減少語言和文化障礙造成的誤解。雙方還可以基於人工智慧能力制定風險等級,例如將後勤保障相關的人工智慧確定為低風險等級,將自主核武人工智慧確定為高風險等級。進一步討論人工智慧應用領域,同時規定人工智慧在致命武器中的使用規範。即使雙方的談判不會達成預期結果,探討這些問題也有助於增進對彼此的理解。
(六) 持續尋求建立中美策略風險與危機管理機制
建立有效的中美外交管道,尤其是保持首腦層級的聯繫,對降低策略風險、管理潛在的危機至關重要。報告建議美國要持續探索建立中美戰略風險和危機管理機制,即使是間歇性發揮作用,也勝過沒有機制。
(七) 使軍事人工智慧成為與核武和戰略穩定相關的對華外交基本支柱
軍事人工智慧在核子能力與其他戰略能力的平衡方面發揮著越來越重要的作用。報告建議,由美國在聯合國五個常任理事國層級發起推動「戰略穩定」的討論,並將軍事人工智慧納入談判。
(八)降低其他領域的策略風險
報告認為,美國應盡快採取措施,減低其他相關領域的戰略風險,謹慎採取單邊行動。例如在局勢緊張加劇時推遲洲際彈道飛彈試射,特別是在不需要立即進行試驗來確保安全、可靠和有效的核威懾的情況下。
(九)強化情報蒐集、分析與評估
軍事人工智慧的發展走向不僅取決於它本身,還取決於它與核武、軍事基礎設施、通訊能力等因素之間的相互作用,因此迫切需要加深對軍事人工智慧相關的整體戰略穩定性的理解。報告建議美國責成相關組織完善或在需要時建立多學科辦公室和專家骨幹,密切關注中國的民用及軍事人工智慧活動,監測、分析與該問題相關的情報,並給予建議。
四、結 語
人工智慧軍事應用可能加劇戰略風險,需要各國攜手對人工智慧技術發展加以探索和規制。面對人工智慧技術為人類社會帶來的機會與挑戰,各國應以對話打消猜忌,以合作取代對立,並攜手推動人工智慧領域依良法、促善治,使人工智慧技術真正造福人類。

文 | 文力浩、陳琳(國防科技大學)

中國原創軍事資源:http://www.81it.com/2023/1024/14640888.html
