China’s Military Development Trends and Governance Strategies for the Weaponization of Artificial Intelligence
Zhu Qichao
People’s Tribune (《人民論壇》), February 5, 2025, Edition 02
[Abstract] The weaponization of artificial intelligence (AI) is an inevitable trend in the new round of military revolution. Recent local wars and conflicts have further spurred relevant countries to advance their AI weaponization strategies in order to seize the high ground in future warfare. The potential risks of AI weaponization cannot be ignored: it may intensify the arms race and disrupt the strategic balance; empower operational processes and increase conflict risks; complicate accountability and increase collateral damage; and lower the proliferation threshold, leading to misuse and abuse. To address this, it is necessary to strengthen international strategic communication to ensure consensus and cooperation among countries on the military applications of AI; promote dialogue and coordination in the development of laws and regulations to form a unified and standardized legal framework; strengthen ethical constraints on AI to ensure that technological development conforms to ethical standards; and actively participate in global security governance cooperation to jointly maintain peace and stability in the international community.
[Keywords] Artificial intelligence, military applications, security risks, security governance [Chinese Library Classification Number] F113 [Document Code] A
The weaponization of artificial intelligence (AI) refers to the application of AI-related technologies, platforms, and services to the military field, making them a crucial driving force empowering military operations and thereby enhancing the efficiency, precision, and autonomy of those operations. With the widespread application of AI technology in the military, major powers and leading military powers have increased their strategic and resource investment, accelerating research and application. The frequent regional conflicts of recent years have further stimulated the battlefield application of AI, profoundly shaping the form of warfare and the future direction of military transformation.
It cannot be ignored that artificial intelligence, as a rapidly developing technology, inherently carries potential risks due to its immature technology, inaccurate scenario matching, and incomplete supporting conditions. Furthermore, human misuse, abuse, or even malicious use can easily bring various risks and challenges to the military and even international security fields. To earnestly implement the Global Security Initiative proposed by General Secretary Xi Jinping, we must directly confront the worldwide trend toward weaponizing artificial intelligence, deeply analyze the potential security risks arising from the weaponization of AI, and consider scientifically feasible governance approaches and measures.
Current trends in the weaponization of artificial intelligence
In recent years, the application of artificial intelligence in the military field is fundamentally reshaping the future form of warfare, changing future combat systems, and influencing the future direction of military transformation. Major military powers have regarded artificial intelligence as a disruptive key technology that will change the rules of future warfare, and have invested heavily in the research and development and application of AI weapons.
The weaponization of artificial intelligence is an inevitable trend in military transformation.
With the rapid development of science and technology, the necessity and urgency of military transformation are becoming increasingly prominent. Artificial intelligence, by simulating human thought processes, extends human mental and physical capabilities, enabling rapid information processing, analysis, and decision-making. It can also develop increasingly complex unmanned weapon system platforms, thereby providing unprecedented intelligent support for military operations.
First, it provides intelligent support for military intelligence reconnaissance and analysis. Traditional intelligence reconnaissance methods are constrained by multiple factors such as manpower and time, making it difficult to effectively cope with the demands of large-scale, high-speed, and highly complex intelligence processing. The introduction of artificial intelligence (AI) technology has brought innovation and breakthroughs to the field of intelligence reconnaissance. In military infrastructure, the application of AI technology can build intelligent monitoring systems, providing high-precision, real-time intelligence perception services. In the field of intelligence reconnaissance, AI technology has the ability to process multiple “information streams” in real time, thereby greatly improving analysis efficiency. ① By using technologies such as deep learning, it is also possible to “see through the phenomena to the essence,” uncovering the deep-seated connections and causal relationships within various fragmented intelligence information, rapidly transforming massive amounts of fragmented data into usable intelligence, thereby improving the quality and efficiency of intelligence analysis.
Second, it provides data support for combat command and decision-making. Artificial intelligence provides strong support for combat command and military decision-making in terms of battlefield situational awareness.② Its advantage lies in its ability to perform key tasks such as data mining, data fusion, and predictive analysis. In informationized and intelligent warfare, the battlefield environment changes rapidly and the volume of intelligence information is enormous, requiring rapid and accurate decision-making responses. Advanced computer systems have therefore become important tools for assisting commanders in managing intelligence data, assessing the enemy situation, proposing operational plans, and formulating plans and orders. For example, the ISTAR (Intelligence, Surveillance, Target Identification and Tracking) system developed for the US military by Raytheon Technologies Corporation encompasses intelligence gathering, surveillance, target identification, and tracking functions. It can aggregate data from diverse information sources such as satellites, ships, aircraft, and ground stations, and perform in-depth analysis and processing. This not only significantly improves the speed at which commanders acquire information but also provides data support through intelligent analysis systems, making decision-making faster, more efficient, and more accurate. (A schematic sketch of such multi-source data fusion appears after the fourth point below.)
Third, it provides crucial support for unmanned combat systems. Unmanned combat systems are a new type of weapon system capable of independently completing military missions without direct human control. They primarily consist of intelligent unmanned combat platforms, intelligent munitions, and intelligent combat command and control systems, possessing significant autonomy and intelligence. As a technological equipment leading the transformation of future warfare, unmanned combat systems have become a crucial bargaining chip in inter-state military competition. This system achieves adaptability to different battlefield environments and operational spaces by utilizing key technologies such as autonomous navigation, target recognition, and path planning. With the help of advanced algorithms such as deep learning and reinforcement learning, unmanned combat systems can independently complete navigation tasks and achieve precise target strikes. The design philosophy of this system is “unmanned platform, manned system,” essentially an intelligent extension of manned combat systems. For example, the MQM-57 Falconer unmanned aerial vehicle developed by the U.S. Defense Advanced Research Projects Agency (DARPA) employs advanced artificial intelligence technology and possesses highly autonomous target recognition and tracking capabilities.
Fourth, it provides technical support for military logistics and equipment support. In the context of information warfare, the pace of war has accelerated, mobility has increased, and combat consumption has risen significantly. The traditional “excess pre-stockpiling” support model can no longer meet the rapidly changing needs of the modern battlefield, which places higher demands on providing combat troops with rapid, precise logistics and equipment support that is timely, location-appropriate, demand-matched, and properly scaled. Artificial intelligence, as a technology with spillover and cross-integration characteristics, is merging with cutting-edge technologies such as the Internet of Things, big data, and cloud computing. This has enabled AI knowledge, technology, and industry clusters to fully penetrate the military logistics field, significantly enhancing logistics and equipment support capabilities.
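As a minimal illustration of the data fusion mentioned in the second point above (hypothetical and deliberately simplified; it is not drawn from ISTAR or any fielded system, and the sensor names and numbers are invented), the sketch below combines position reports from several sources by inverse-variance weighting, so that more reliable sensors carry more weight:

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str          # e.g. "satellite", "ship", "aircraft", "ground station"
    position_km: float   # one-dimensional position estimate along a track, in km
    variance: float      # reported measurement uncertainty (km^2)

def fuse_reports(reports: list[SensorReport]) -> tuple[float, float]:
    """Inverse-variance weighted fusion of independent position estimates:
    each source is weighted by its reliability, so one precise report
    outweighs several noisy ones."""
    weights = [1.0 / r.variance for r in reports]
    fused_pos = sum(w * r.position_km for w, r in zip(weights, reports)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # the fused estimate is tighter than any single input
    return fused_pos, fused_var

reports = [
    SensorReport("satellite", 102.0, 4.0),
    SensorReport("aircraft", 99.5, 1.0),
    SensorReport("ground station", 101.0, 9.0),
]
pos, var = fuse_reports(reports)
print(f"fused position: {pos:.2f} km, variance: {var:.3f} km^2")
```

Real systems fuse full track states with covariance matrices (for example, via Kalman filtering) rather than a single coordinate, but the principle is the same: combining independent sources yields an estimate tighter than any single sensor provides.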
Major countries are actively developing military applications of artificial intelligence.
To enhance their global competitiveness in the field of artificial intelligence, major powers such as the United States, Russia, and Japan are accelerating their strategic deployments for the military applications of AI. First, they are updating and adjusting their top-level strategic plans in the field of AI to provide clear guidance for future development. Second, in response to the needs of future warfare, they are accelerating the deep integration of AI technology with the military field, promoting the intelligent, autonomous, and unmanned development of equipment systems. Furthermore, they are actively innovating operational concepts to drive innovation in combat forces, thereby enhancing combat effectiveness and competitive advantage.
First, formulating strategic plans. Driven by a strategic obsession with pursuing military, political, and economic hegemony through technological dominance, the United States is accelerating its military intelligentization process. In November 2023, the U.S. Department of Defense released the “Data, Analytics, and Artificial Intelligence Adoption Strategy,” aiming to expand the advanced capabilities of the entire Department of Defense system to gain a lasting military decision-making advantage. The Russian military issued what is known as “Version 3.0” of the “Russian Armaments Development Program for 2024-2033,” designed to guide weapons development over the next decade. The program emphasizes continued advancement in nuclear and conventional weapons development, with a focus on research into artificial intelligence and robotics, hypersonic weapons, and other strike weapons based on new physical principles.
Second, developing advanced equipment systems. Since 2005, the U.S. military has released a “Roadmap for Unmanned Systems” every few years to envision and design unmanned system platforms in the air, ground, and surface/underwater domains, connecting the development chain of unmanned weapons and equipment from research and development through production, testing, training, combat, and support. Currently, more than 70 countries worldwide are capable of developing unmanned system platforms, and various types of drones, unmanned ground vehicles, unmanned boats (vessels), and unmanned underwater vehicles are emerging rapidly. On July 15, 2024, former Chairman of the Joint Chiefs of Staff Mark Milley stated in an interview with Defense News that by 2039, one-third of the U.S. military force will be composed of robots. The Russian military’s Platform-M combat robot, “Lancet” suicide drone, and S-70 “Hunter” heavy drone have already been committed to testing in actual combat.
Third, innovating future operational concepts. Operational concepts are forward-looking studies of future warfare styles and methods, and often guide new force organization and leapfrog development of weaponry. In recent years, the US military has proposed operational concepts such as “distributed lethality,” “multi-domain warfare,” and “mosaic warfare,” attempting to guide the direction of military transformation. Taking “mosaic warfare” as an example, this concept treats various sensors, communication networks, command and control systems, and weapon platforms as “mosaic fragments.” Empowered by artificial intelligence, these “fragment” units can be dynamically linked, autonomously planned, and collaboratively combined through networked information systems, forming an on-demand, integrated, highly resilient, and mobile kill web. In March 2022, the US Department of Defense released the “Joint All-Domain Command and Control (JADC2) Strategy Implementation Plan,” which aims to expand multi-domain operations into an all-domain operations concept, connecting sensors from all services into a unified “Internet of Things” and using artificial intelligence algorithms to help improve operational command decisions.③
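The recomposable kill web at the heart of the mosaic concept can be illustrated with a minimal sketch (purely illustrative; the tile names and the three-role chain are assumptions, not any actual JADC2 interface): capability “fragments” are matched on demand into a sensor-to-command-to-effector chain, and the chain is rebuilt from surviving fragments when one is lost.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    role: str          # "sensor", "c2" (command and control), or "effector"
    available: bool = True

def compose_chain(tiles: list[Tile]) -> list[str] | None:
    """Assemble one sensor -> c2 -> effector chain from available tiles;
    return None if any role cannot be filled (the chain cannot close)."""
    chain = []
    for role in ("sensor", "c2", "effector"):
        candidates = [t for t in tiles if t.role == role and t.available]
        if not candidates:
            return None
        chain.append(candidates[0].name)
    return chain

tiles = [
    Tile("recon drone", "sensor"), Tile("ground radar", "sensor"),
    Tile("c2 node", "c2"),
    Tile("strike drone", "effector"), Tile("artillery unit", "effector"),
]
print(compose_chain(tiles))   # one viable chain closes
tiles[0].available = False    # one "fragment" is lost...
print(compose_chain(tiles))   # ...and the chain recomposes from the rest
```

This substitutability is the point of the mosaic metaphor: no single fragment is indispensable, so the network degrades gracefully instead of failing as a whole.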
War and conflict have spurred the weaponization of artificial intelligence.
In recent years, local conflicts such as the Libyan conflict, the Nagorno-Karabakh conflict, the Ukraine crisis, and the Israel-Hamas conflict have continued, further stimulating the development of the weaponization of artificial intelligence.
In the Libyan conflict, both sides employed various types of drones for reconnaissance and combat missions. A report by the UN Panel of Experts on Libya noted that the Turkish-made Kargu-2 drone conducted a “hunt down and remotely engage” operation in Libya in 2020, autonomously attacking retreating enemy soldiers. This event marked the first use of a lethal autonomous weapon system in actual combat. As the American scholar Zachary Kallenborn observed, if anyone died in such an autonomous attack, it would likely be the first known instance of an AI-based autonomous weapon being used to kill. In the 2020 Nagorno-Karabakh conflict, Azerbaijan used formations of Turkish-made Bayraktar TB2 drones and Israeli-made Harop drones to penetrate Armenian air defenses, gaining air superiority and battlefield initiative.④ The marked success of Azerbaijan’s drone operations stemmed in large part from the Armenian army’s underestimation of its enemy and insufficient appreciation of the importance and threat of drones in modern warfare. In addition, from the standpoint of offensive strategy, the Azerbaijani army innovated boldly in drone tactics, flexibly employing reconnaissance-strike drones, loitering munitions, and other advanced equipment, which not only improved combat efficiency but also greatly enhanced the surprise and lethality of its attacks.⑤
During the Ukraine crisis that erupted in 2022, both Russia and Ukraine have made extensive use of military-grade and commercial drones for reconnaissance, surveillance, artillery targeting, and strike missions. The Ukrainian army, using the Bayraktar TB2 drone and the US-supplied “Switchblade” series of suicide drones, conducted precision strikes with high lethality, making these systems “battlefield killers” that drew worldwide attention. In the Israel-Hamas conflict, the Israeli military was accused of using an artificial intelligence system called “Lavender” to identify and lock onto bombing targets in Gaza, marking as many as 37,000 Palestinians in Gaza as suspected “militants” and designating them as targets for direct “assassination.” This action drew widespread international attention and condemnation.⑥
Security risks arising from the weaponization of artificial intelligence
From automated command systems to intelligent unmanned combat platforms, and then to intelligent decision-making systems in cyber defense, the application of artificial intelligence (AI) technology in the military field is becoming increasingly widespread and has become an indispensable part of modern warfare. However, with the trend of weaponizing AI, its misuse, abuse, and even malicious use will also bring significant risks and challenges to international security.
Intensifying the arms race and disrupting the strategic balance.
In the informationized and intelligent era, the disruptive potential of artificial intelligence is irresistible to major military powers, all of which are focusing on the development and application of AI military capabilities, fearing that falling behind in this field will mean missing a strategic opportunity. Deepening the military application of artificial intelligence can yield an “asymmetric advantage” at lower cost and with higher efficiency.
First, countries are vying for “first-mover advantage.” When a country achieves a technological lead in the development of intelligent weapon systems, it signifies that the country possesses more advanced artificial intelligence and related application capabilities, giving it a first-mover advantage in weapon system development, control, and contingency response. This advantage includes higher autonomy, intelligence, and adaptability, thereby increasing the country’s military strength and strategic competitive advantage. At the same time, the military advantage of a first-mover can become a security threat to competitors, leading to a competitive race among countries to advance the military application of advanced technologies. ⑦ In August 2023, U.S. Deputy Secretary of Defense Kathleen Hicks announced the “Replicator initiative,” which aims to deploy thousands of “autonomous weapon systems” in the Indo-Pacific region in less than two years. ⑧
Second, the lack of transparency in countries’ development of AI military equipment may exacerbate the arms race. This is mainly due to two reasons. First, AI technology is an “enabling technology” that can be used to design a variety of applications. This makes verifying the specific military applications of AI extremely difficult, unlike nuclear weapons, where monitoring uranium, centrifuges, and weapon and delivery systems can help determine whether a country is developing or deploying nuclear weapons. The differences between semi-autonomous and fully autonomous weapon systems arise primarily from differences in computer software algorithms, making it difficult to verify treaty compliance through physical means. Second, to maintain their strategic advantage, countries often keep details of the military applications of advanced technologies secret, preventing adversaries from discerning their strategic intentions. In the current international environment, this lack of transparency not only intensifies the arms race but also sows the seeds of future escalation of conflict.
Third, the uncertainty of national strategic intentions also exacerbates the arms race. The impact of artificial intelligence on strategic stability, nuclear deterrence, and the escalation of war largely depends on other countries’ perception of its capabilities rather than on its actual capabilities. As the American scholar Thomas Schelling pointed out, international relations often feature risk competition, testing courage more than force; the relationship between major adversaries is determined by which side is ultimately willing to commit greater power, or to make it appear that it is about to commit greater power.⑨ An actor’s perception of the capabilities of others, whether true or false, significantly influences the course of an arms race. If a country vigorously develops intelligent weapon systems, its competitors, uncertain of its intentions, will grow suspicious of its military capabilities and of the intentions behind its military buildup, and will often take reciprocal measures, that is, develop their own armaments to meet their own security needs. It is precisely this ambiguity of intention that stimulates technological accumulation, exacerbates the instability of weapons deployment, and ultimately produces a vicious cycle.
Empowering operational processes and increasing conflict risks.
Empowered by big data and artificial intelligence technologies, traditional combat processes will undergo intelligent restructuring, shifting from “situational awareness—command and decision-making—offensive and defensive coordination—comprehensive support” to “intelligent situational awareness across the entire domain—human-machine integrated hybrid decision-making—manned/unmanned autonomous coordination—proactive and on-demand precise support.” However, while this intelligent restructuring of combat processes improves operational efficiency and accuracy, it also increases the risk of conflict and miscalculation.
First, wars that break out at “machine speed” will increase the risk of hasty action. Artificial intelligence weapon systems demonstrate formidable capabilities in precision and reaction speed, making future wars likely to erupt at “machine speed.”⑩ However, excessively rapid warfare will also increase the risk of conflict. In areas that emphasize autonomy and reaction speed, such as missile defense, autonomous weapon systems, and cyberspace, faster reaction times will bring significant strategic advantages. At the same time, they will drastically reduce the time window for the defending side to react to military actions, placing commanders and decision-makers under immense “time pressure,” exacerbating the risk of “hasty action,” and increasing the possibility of unexpected escalation of the crisis.
Second, relying on system autonomy may increase the probability of misjudgment under pressure. The U.S. Department of Defense believes that “highly autonomous artificial intelligence systems can autonomously select and execute corresponding operations based on dynamic changes in mission parameters, efficiently achieving human-preset goals. Increased autonomy not only significantly reduces reliance on human labor and improves overall operational efficiency, but is also regarded by defense planners as a key element in maintaining tactical leadership and ensuring battlefield advantage.” ⑪ However, because human commanders cannot react quickly enough, they may gradually delegate control to autonomous systems, increasing the probability of misjudgment. In March 2003, the U.S. Patriot missile system mistakenly identified a friendly Tornado fighter jet as an anti-radiation missile. Under pressure with only a few seconds to react, the commanders chose to launch the missile, resulting in the deaths of two pilots.⑫
Third, it weakens the effectiveness of crisis-termination mechanisms. During the Cold War, the US and the Soviet Union spearheaded a series of restrictive measures to curb the escalation of crises and prevent them from evolving into large-scale nuclear war. In these measures, humans played a crucial “monitoring” role: should a crisis risk spiraling out of control, they could initiate termination measures in sufficient time to avert a large-scale humanitarian catastrophe. However, as the computing power of artificial intelligence systems grows and their integration with machine learning deepens, combat responses become more rapid, precise, and destructive, potentially weakening human mechanisms for intervening to terminate a crisis.
Complicating war accountability and increasing collateral damage.
Artificial intelligence weapon systems make it more difficult to define responsibility in war. In traditional warfare, weapon systems are controlled by humans, and if errors or crises occur, the human operator or the developer of the operating system bears the corresponding responsibility. Artificial intelligence technology itself weakens human agency and control, making the attribution of responsibility for technical actions unclear.
First, there is the “black box” problem of artificial intelligence. While AI has significant advantages in processing and analyzing data, its internal operating principles and causal logic are often difficult for humans to understand and explain. This makes it challenging for programmers to correct erroneous algorithms, a problem often referred to as the “black box” of algorithmic models. If an AI-powered weapon system causes a security harm, the “algorithm black box” could become a convenient excuse for the responsible parties to shirk accountability: those seeking accountability would face generalized blame-shifting and deflection, with the finger ultimately pointed at the AI weapon system itself. In practice, the inability to understand and explain the decision-making process of AI can lead to a series of problems, such as decision-making errors, crises of trust, and information misuse.
Second, there is the issue of delineating human-machine responsibility in military operations. When an AI system malfunctions or makes a decision-making error, should it be treated as an independent entity and held responsible, or should it be considered a tool, with human operators bearing all or part of the responsibility? The complexity of this delineation lies not only at the technical level but also at the ethical and legal levels. On the one hand, although AI systems can make autonomous decisions, their decision-making processes are still constrained by human-preset programs and algorithms, so their responsibility cannot be wholly independent of humans. On the other hand, in certain situations AI systems may exceed human-preset limits and make independent decisions; how to define responsibility in such cases has likewise become a difficult problem in the field of arms control.
Third, there is the issue of allocating decision-making power between humans and AI weapon systems. Depending on the level of machine autonomy, AI systems can execute tasks under three decision-making and control modes: semi-autonomous, supervised autonomous, and fully autonomous. In semi-autonomous systems, decision-making power over actions rests with the human; in supervised autonomous operations, humans supervise and intervene when necessary; in fully autonomous operations, humans do not participate in the process. As the military application of AI deepens, the role of humans in combat systems is gradually shifting from the traditional “human-in-the-loop” model to “human-on-the-loop,” evolving from direct controllers inside the system into supervisors outside it. However, this shift raises new questions: how to ensure that AI weapon systems still adhere to human ethics and values while operating independently is a major challenge currently facing AI weapons development.
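As a purely conceptual sketch of these three modes (the function and mode names below are hypothetical, not taken from any actual weapon-control standard or from the article’s sources), the authorization logic can be reduced to a few lines:

```python
from enum import Enum

class Mode(Enum):
    SEMI_AUTONOMOUS = "human-in-the-loop"     # the human decides each action
    SUPERVISED = "human-on-the-loop"          # machine acts; the human may veto
    FULLY_AUTONOMOUS = "human-out-of-the-loop"

def action_proceeds(mode: Mode, machine_recommends: bool,
                    human_approves: bool, human_vetoes: bool) -> bool:
    """Whether a machine-recommended action goes ahead under each mode."""
    if not machine_recommends:
        return False
    if mode is Mode.SEMI_AUTONOMOUS:
        return human_approves        # nothing happens without explicit approval
    if mode is Mode.SUPERVISED:
        return not human_vetoes      # proceeds by default unless interrupted
    return True                      # fully autonomous: the human is not consulted

# Same recommendation, no human response in time: only the semi-autonomous
# mode holds fire, which is where the "time pressure" risk enters.
for mode in Mode:
    print(mode.value, action_proceeds(mode, True, human_approves=False, human_vetoes=False))
```

Viewed this way, the move from “human-in-the-loop” to “human-on-the-loop” to full autonomy is little more than a change in one branch of authorization logic, which also underlines the earlier point that semi-autonomous and fully autonomous systems differ mainly in software and are therefore hard to distinguish by physical inspection.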
Lowering the proliferation threshold and leading to misuse and abuse.
Traditional strategic competition typically involves large-scale weapons system development and procurement, requiring substantial financial and technological support. With the maturation and diffusion of artificial intelligence (AI) technology, its accessibility and low cost make it possible for even small and medium-sized countries to develop advanced intelligent weapons systems. Currently, strategic competition in the field of military AI is primarily concentrated among major military powers such as the US and Russia. However, in the long run, the proliferation of AI technology will broaden the scope of strategic competition, posing a disruptive threat to the existing strategic balance. Once smaller countries possessing AI technology achieve relatively strong competitiveness, their willingness to confront threats from major powers may increase.
First, artificial intelligence (AI) facilitates the development of lightweight and agile combat methods, encouraging smaller states and non-state actors to engage in small-scale, opportunistic military adventures to achieve their strategic objectives at a lower cost and with more diverse means. Second, the rapid development of AI has led to the increasing prominence of new forms of warfare such as cyber warfare and electronic warfare. In a highly competitive battlefield environment, malicious third-party actors can manipulate information to influence military planning and strategic deterrence, leading to escalation. The 2022 Ukraine crisis saw numerous instances of online disinformation used to confuse the public. Third, the widespread application of AI technology has also reduced strategic transparency. Traditional military strategies often rely on extensive intelligence gathering, analysis, and prediction; however, with the assistance of AI, operational planning and decision-making processes become more complex and unpredictable. This lack of transparency can lead to misunderstandings and misjudgments, thereby increasing the risk of conflict escalation.
Governance paths for the security risks arising from the weaponization of artificial intelligence
To ensure the safe development of artificial intelligence and avoid the potential harm of its weaponization, we should strengthen international communication on governance strategies and seek consensus and cooperation among countries on the military applications of AI; promote dialogue and coordination on laws and regulations to form a unified and standardized legal framework; strengthen ethical constraints on AI to ensure that technological development conforms to ethical standards; and actively participate in global security governance cooperation to jointly safeguard the peace and stability of the international community.
Attach great importance to strategic communication at the international level.
Artificial intelligence governance is a global issue that requires concerted efforts from all countries to resolve. On the international stage, the interests of nations are intertwined yet conflicting; therefore, addressing global issues through effective communication channels is crucial for maintaining world peace and development.
On the one hand, it is essential to accurately grasp the challenges of international governance of artificial intelligence. This involves understanding the consensus among nations on the weaponization of AI, while also closely monitoring policy differences among countries regarding the security governance of AI weaponized applications. Through consultation and cooperation, relevant initiatives should be aligned with the UN agenda to effectively prevent the misuse of AI for military purposes and promote its peaceful application.
On the other hand, it is crucial to encourage governments to reach relevant agreements and build strategic mutual trust through official or semi-official dialogues. Compared to the “Track 1 dialogue” at the government level, “Track 1.5 dialogue” refers to dialogues involving both government officials and civilians, while “Track 2 dialogue” is a non-official dialogue conducted by academics, retired officials, and others. These two forms of dialogue offer greater flexibility and serve as important supplements and auxiliary means to official intergovernmental dialogues. Through diverse dialogue methods, officials and civilians can broadly discuss possible paths to arms control, share experiences and expertise, and avoid escalating the arms race and worsening tensions. These dialogue mechanisms will provide countries with a continuous platform for communication and cooperation, helping to enhance mutual understanding, strengthen strategic mutual trust, and jointly address the challenges posed by the militarization of artificial intelligence.
Scientifically formulate laws and ethical guidelines for artificial intelligence.
Artificial intelligence (AI) technology itself is neither right nor wrong, neither good nor evil; but in the design, research and development, manufacture, use, operation, and maintenance of AI there are certainly distinctions between good and evil intentions. The weaponization of AI has sparked widespread ethical concerns. Under the framework of international law, can autonomous weapon systems accurately distinguish between combatants and civilians on complex battlefields? Furthermore, if AI weapon systems cause unintended harm, how should liability be determined? Is entrusting life-or-death decision-making power to machines in accordance with ethical standards? These concerns highlight the necessity of strengthening ethical constraints on AI.
On the one hand, it is essential to put ethics first and integrate the concept of “intelligence for good” at the very source of the technology. In the design of AI military systems, values such as people-centeredness and intelligence for good should be embedded within the system. The aim is to prevent at the source any indiscriminate killing and harm that AI might cause, to control its excessive destructive power, and to prevent accidental damage, thereby confining the damage caused by AI weapon systems to the smallest possible range. Currently, nearly a hundred institutions and government departments in China and abroad have published various AI ethics principles documents, and the academic and industrial communities have reached a consensus on basic AI ethical principles. In 2022, the “Position Paper on Strengthening Ethical Governance of Artificial Intelligence” that China submitted to the United Nations provided an important reference for the development of global AI ethics regulation. The document explicitly emphasizes that AI ethics regulation should be advanced through measures such as institution building, risk management, and collaborative governance.
On the other hand, it is necessary to improve relevant laws and regulations and clarify the boundaries of rights and responsibilities of artificial intelligence entities. Strict technical review standards should be established to ensure the safety and reliability of AI systems. Comprehensive testing should be conducted before AI systems are deployed to ensure they do not negatively impact human life and social order. The legal responsibilities of developers, users, maintainers, and other parties throughout the entire lifecycle of AI systems should be clearly defined, and corresponding accountability mechanisms should be established.
Pragmatically participate in international cooperation on artificial intelligence security governance.
The strategic risks posed by the military applications of artificial intelligence further highlight the importance of pragmatic international security cooperation. It is recommended to focus on three key areas:
First, we should promote the formulation of guidelines for the application of artificial intelligence in the military field. Developing codes of conduct for the military application of artificial intelligence is an important responsibility of all countries in regulating its military use, and a necessary measure to promote international consensus and comply with international regulations. In 2021, the Chinese government submitted its “Position Paper on Regulating the Military Application of Artificial Intelligence” to the UN Convention on Certain Conventional Weapons Conference, and in 2023, it released the “Global Artificial Intelligence Governance Initiative,” both of which provide constructive references for improving the codes of conduct for regulating the military application of artificial intelligence.
Second, it is essential to establish a suitable regulatory framework. The dual-use nature of artificial intelligence (AI) involves numerous stakeholders, making the role of non-state actors such as NGOs, technical communities, and technology companies increasingly prominent in the global governance of AI, thus becoming a crucial force in building a regulatory framework for the military application of AI. Technical regulatory measures that countries can adopt include: clarifying the scope of AI technology use, responsible parties, and penalties for violations; strengthening technological research and development to improve the security and controllability of the technology; and establishing regulatory mechanisms to monitor the entire process of technology research and development and application, promptly identifying and resolving problems.
Third, we will jointly develop technologies and solutions for AI security. We encourage the inclusion of bilateral or multilateral negotiations between governments and militaries in the dialogue options for military AI applications, and promote extensive exchanges on military AI security technologies, operating procedures, and practical experience. We will also promote the sharing and reference of relevant risk management technical standards and usage norms, and continuously inject new stabilizing factors into the international security and mutual trust mechanism in the context of the militarization of AI.
(The author is the director and researcher of the National Defense Science and Technology Strategy Research Think Tank at the National University of Defense Technology, and a doctoral supervisor; Liu Hujun, a master’s student at the School of Foreign Languages of the National University of Defense Technology, also contributed to this article.)
[Notes]
① Katz B. Analytic Edge: Leveraging Emerging Technologies to Transform Intelligence Analysis [R]. Washington, D.C.: Center for Strategic and International Studies, 2020.
② Paul McLeary. Pentagon’s Big AI Program, Maven, Already Hunts Data in Middle East, Africa [N]. Breaking Defense, May 1, 2018.
③ Tang Xinhua, “Technological Interoperability in the U.S. Integrated Deterrence Strategy,” Pacific Journal, No. 12, 2022, pp. 15-25.
④ Azerbaijan’s Drones Owned the Battlefield in Nagorno-Karabakh—and Showed Future of Warfare [N]. The Washington Post, November 11, 2020.
⑤ Zhu Qichao, Chen Xi, Long Kun, “Drone Warfare and the Nagorno-Karabakh Conflict,” China International Strategy Review, No. 2, 2020, pp. 167-183.
⑥ The Verge report: Israel used AI to identify bombing targets in Gaza [EB/OL]. [2024-04-05]. artificial-intelligence-gaza-ai#:~:text.
⑦ Luo Yixuan, Li Bin, “First-Mover Advantage in Military Artificial Intelligence Competition,” Quarterly Journal of International Politics, No. 3, 2022, pp. 1-33.
⑧ U.S. Department of Defense. Deputy Secretary of Defense Kathleen Hicks Keynote Address: The Urgency to Innovate (As Delivered) [EB/OL]. [2023-08-28]. https://www.defense.gov/News/Speeches/Speech/Article/3507156/deputy-secretary-of-defense-kathleen-hicks-keynote-address-the-urgency-to-innov/.
⑨ Thomas Schelling, Arms and Influence, trans. Mao Ruipeng, Shanghai: Shanghai People’s Publishing House, 2017, p. 81.
⑩ Rautenbach P. Keeping Humans in the Loop is Not Enough to Make AI Safe for Nuclear Weapons [EB/OL]. enough-to-make-ai-safe-for-nuclear-weapons/, 2023-02-16/2024-01-09.
⑪ Mayer M. The New Killer Drones: Understanding the Strategic Implications of Next-Generation Unmanned Combat Aerial Vehicles [J]. International Affairs, 2015, 91(4): 771.
⑫ Paul Scharre, Army of None: Autonomous Weapons and the Future of War, trans. Zhu Qichao, Wang Shu, Long Kun, Beijing: World Affairs Press, 2019, pp. 153-156.
Original Chinese-language source: https://paper.people.com.cn/rmlt/pc/content/202502/05/content_30058889349.html