The People’s Republic of China’s Development Trends and Governance Strategies for the Weaponization of Artificial Intelligence

(Original title: 中華人民共和國人工智慧武器化發展趨勢與治理策略)

[Abstract] The weaponization of artificial intelligence is an inevitable trend in the new round of military transformation. Local wars and conflicts in recent years have further spurred the countries concerned to advance the strategic deployment of weaponized AI and to seize the commanding heights of future warfare. The potential risks of AI weaponization cannot be ignored: it may intensify the arms race and upset the strategic balance; empower combat processes and raise the risk of conflict; complicate accountability and increase collateral casualties; and lower the threshold of proliferation, leading to misuse and abuse. In response, we should strengthen international strategic communication to build consensus and cooperation among countries on the military application of AI; promote dialogue and coordination on laws and regulations to form a unified and standardized legal framework; strengthen ethical constraints on AI to ensure that technological development meets ethical standards; and actively participate in global security governance cooperation to jointly safeguard peace and stability in the international community.

[Keywords] artificial intelligence; military application; security risks; security governance

    The weaponization of artificial intelligence refers to the application of AI-related technologies, platforms, and services to the military field, making them an important driving force enabling military operations and thereby improving the efficiency, accuracy, and autonomy of those operations. With the widespread application of AI technology in the military field, major countries and military powers have increased their strategic and resource investments and accelerated the pace of research, development, and deployment. Frequent regional wars and conflicts in recent years have further stimulated the battlefield application of AI and are profoundly shaping the form of war and the future direction of military transformation.

    It cannot be ignored that, as a rapidly developing technology, AI itself carries potential risks arising from immature underlying technology, inaccurate scenario matching, and incomplete supporting conditions; human misuse, abuse, and even malicious use can likewise bring a variety of risks and challenges to the military field and to international security more broadly. To conscientiously implement the Global Security Initiative proposed by General Secretary Xi Jinping, we must squarely face the worldwide trend toward the weaponization of AI, analyze in depth the security risks it may bring, and work out scientific and feasible governance ideas and measures.

    Current trends in the weaponization of artificial intelligence

    In recent years, the application of artificial intelligence in the military field has been fundamentally reshaping the future form of war, changing future combat systems, and influencing the direction of military reform. Major military powers regard artificial intelligence as a disruptive key technology that will change the rules of future wars, and they have invested substantial resources to promote the research, development, and application of AI weapons.

    The weaponization of artificial intelligence is an inevitable trend in military transformation.

    With the rapid development of science and technology, the necessity and urgency of military reform have become increasingly prominent. Artificial intelligence can simulate human thought processes, extend human mental and physical capacities, enable rapid information processing, analysis, and decision-making, and support the development of increasingly complex unmanned weapon system platforms, thus providing unprecedented intelligent support for military operations.

    First, it provides intelligent support for military intelligence reconnaissance and analysis. Traditional intelligence reconnaissance methods are constrained by manpower, time, and other factors, and struggle to meet large-scale, high-speed, and highly complex intelligence-processing needs. The introduction of artificial intelligence technology has brought innovation and breakthroughs to the field. In military infrastructure, AI can be used to build intelligent monitoring systems that provide high-precision, real-time intelligence perception services. In intelligence reconnaissance, AI can process multiple “information flows” in real time, greatly improving analysis efficiency. ① Using technical tools such as deep learning, it is also possible to “see the essence through the phenomenon”: to dig out the deep context and causal relationships in fragmented intelligence of various types and quickly transform massive fragmented data into usable intelligence, thereby improving the quality and efficiency of intelligence analysis.
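
    To make the last point concrete, the toy sketch below groups invented text fragments by lexical overlap so that related reports surface together. It is a minimal, purely illustrative stand-in for the deep-learning fusion described above; every report string, the similarity threshold, and the function names are hypothetical.

```python
# Illustrative sketch only: group fragmented text reports by lexical
# overlap so related fragments can be reviewed together. All report
# contents are invented; a real pipeline would use learned embeddings.
from itertools import combinations

def tokens(text: str) -> set[str]:
    """Lowercase word set for a crude bag-of-words comparison."""
    return {w.strip(".,") for w in text.lower().split()}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_reports(reports: list[str], threshold: float = 0.2) -> list[set[int]]:
    """Union-find grouping: link any pair of reports above the threshold."""
    parent = list(range(len(reports)))
    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(len(reports)), 2):
        if jaccard(tokens(reports[i]), tokens(reports[j])) >= threshold:
            parent[find(i)] = find(j)
    groups: dict[int, set[int]] = {}
    for i in range(len(reports)):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

if __name__ == "__main__":
    fragments = [
        "convoy of trucks observed near the northern bridge at dawn",
        "trucks crossing the northern bridge reported by a local source",
        "radio chatter mentions fuel shortage at the eastern depot",
    ]
    print(group_reports(fragments))  # fragments 0 and 1 end up together
```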

    Second, it provides data support for combat command and decision-making. Artificial intelligence strongly supports combat command and military decision-making through battlefield situation awareness. ② Its advantage lies in performing key tasks such as data mining, data fusion, and predictive analysis. In informatized and intelligent warfare, the battlefield environment changes rapidly and the volume of intelligence is enormous, demanding fast and accurate decisions. Advanced computer systems have therefore become important tools that help commanders manage intelligence data, judge the enemy’s situation, propose combat plans, and formulate plans and orders. Taking the US military as an example, the ISTAR (intelligence, surveillance, target identification and tracking) system developed by Raytheon Technologies covers intelligence collection, surveillance, and target identification and tracking; it can gather data from multiple sources such as satellites, ships, aircraft, and ground stations and subject them to in-depth analysis and processing. This not only significantly increases the speed at which commanders obtain information but also provides data support through intelligent analysis, making decisions faster, more efficient, and more accurate.
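
    As a minimal illustration of the “data fusion” idea, the sketch below combines one-dimensional position estimates from several notional sources by inverse-variance weighting, a standard textbook rule. All numbers and source names are invented, and the sketch makes no claim about how ISTAR or any fielded system actually works.

```python
# Illustrative sketch: inverse-variance weighted fusion of position
# estimates from notional sources. All values are invented; real fusion
# involves association, filtering, and outlier rejection, none shown here.
from dataclasses import dataclass

@dataclass
class Estimate:
    source: str
    position_km: float  # 1-D position along an axis, for simplicity
    variance: float     # reported measurement variance (km^2)

def fuse(estimates: list[Estimate]) -> tuple[float, float]:
    """Return (fused position, fused variance), assuming independent errors."""
    weights = [1.0 / e.variance for e in estimates]
    total = sum(weights)
    position = sum(w * e.position_km for w, e in zip(weights, estimates)) / total
    return position, 1.0 / total

if __name__ == "__main__":
    reports = [
        Estimate("satellite", 102.0, 4.0),  # wide-area but coarse
        Estimate("aircraft", 100.5, 1.0),   # closer, more precise
        Estimate("ground station", 101.0, 2.25),
    ]
    pos, var = fuse(reports)
    print(f"fused position {pos:.2f} km, variance {var:.2f} km^2")
```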

    Third, it provides important support for unmanned combat systems. Unmanned combat systems are a new type of weapon and equipment system that can complete military tasks independently, without direct human manipulation. They mainly comprise intelligent unmanned combat platforms, intelligent munitions, and intelligent combat command-and-control systems, and they exhibit marked autonomy and intelligence. As equipment that leads the transformation of future forms of war, unmanned combat systems have become an important bargaining chip in military competition between countries. By applying key technologies such as autonomous navigation, target recognition, and path planning, these systems adapt to different battlefield environments and combat spaces. With advanced algorithms such as deep learning and reinforcement learning, they can complete navigation tasks independently and strike targets precisely. The design concept is “unmanned platform, manned system”: in essence, an intelligent extension of manned combat systems. For example, the “MQM-57 Falconer” drone developed by the Defense Advanced Research Projects Agency (DARPA) of the US Department of Defense uses advanced artificial intelligence technology and has highly autonomous target recognition and tracking functions.
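
    The “path planning” named above can be illustrated with the classic A* search algorithm. The sketch below plans a route across a toy occupancy grid; the grid, the costs, and the symbols are invented, and this is a textbook exercise rather than a model of any real platform.

```python
# Illustrative sketch: A* path planning on a toy occupancy grid.
# 'S' is the start, 'G' the goal, '#' an obstacle. Uniform step cost.
import heapq

GRID = [
    "S....",
    ".###.",
    ".....",
    ".###.",
    "....G",
]

def astar(grid: list[str]):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")
    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                step = (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)])
                heapq.heappush(frontier, step)
    return None  # no route through the obstacles

if __name__ == "__main__":
    print(astar(GRID))  # a shortest sequence of grid cells from S to G
```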

    Fourth, it provides technical support for military logistics and equipment support. In informatized warfare, the pace of war has accelerated, mobility has improved, and combat consumption has risen sharply. The traditional “excess pre-stocking” support model can no longer keep up with the rapidly changing needs of the modern battlefield, which places higher demands on delivering fast, precise logistics support to combat troops at the right time, in the right place, for the right need, and in the right quantity. As a technology with spillover and cross-integration characteristics, artificial intelligence is merging with cutting-edge technologies such as the Internet of Things, big data, and cloud computing, allowing AI knowledge groups, technology groups, and industrial groups to penetrate the military logistics and equipment field and significantly improve logistics and equipment support capabilities.

    Major countries are planning to develop military applications of artificial intelligence.

    In order to enhance their global competitiveness in the field of artificial intelligence, major powers such as the United States, Russia, and Japan have stepped up their strategic layout for the military application of artificial intelligence. First, by updating and adjusting the top-level strategic planning in the field of artificial intelligence, they provide clear guidance for future development; second, in response to future war needs, they accelerate the deep integration of artificial intelligence technology and the military field, and promote the intelligent, autonomous, and unmanned development of equipment systems; in addition, they actively innovate combat concepts to drive combat force innovation, thereby improving combat effectiveness and competitive advantages.

    The first is to formulate strategic plans. Driven by the strategic paranoia of pursuing military, political, and economic hegemony through technological hegemony, the United States is accelerating its military intelligentization process. In November 2023, the US Department of Defense issued the “Data, Analytics, and Artificial Intelligence Adoption Strategy,” which aims to scale advanced capabilities across the entire Department of Defense to gain a lasting military decision-making advantage. The Russian military promulgated the “Russian Weapons and Equipment Development Outline for 2024 to 2033,” known as the “3.0 version,” to guide weapons and equipment development over the next ten years. The outline emphasizes continued construction of nuclear and conventional weapons, with a focus on artificial intelligence and robotics, hypersonic weapons, and other strike weapons based on new physical principles.

    The second is to develop advanced equipment systems. Since 2005, the US military has released a new edition of its “Unmanned Systems Roadmap” every few years to envision and design unmanned system platforms in the air, on the ground, and on or under the water, linking the R&D-production-testing-training-combat-support development chain of unmanned weapons and equipment. At present, more than 70 countries can develop unmanned system platforms, and drones, unmanned ground vehicles, unmanned surface vessels, and unmanned underwater vehicles of all types are springing up like mushrooms after rain. On July 15, 2024, Mark Milley, former chairman of the US Joint Chiefs of Staff, said in an interview with Defense News that by 2039 one-third of the US military will be composed of robots. The Platform-M combat robot, the “Lancet” suicide drone, and the S-70 “Hunter” heavy drone developed by Russia have been put to the test in actual combat.

    The third is to innovate future combat concepts. A combat concept is a forward-looking study of future styles and methods of war, and it can often drive leapfrog development of new combat force formations and weapons and equipment. In recent years, the US military has successively proposed combat concepts such as “distributed lethality,” “multi-domain warfare,” and “mosaic warfare” in an attempt to lead the direction of military transformation. Take “mosaic warfare”: this concept treats sensors, communication networks, command-and-control systems, weapon platforms, and the like as “mosaic tiles.” Supported by artificial intelligence, these “tile” units can be dynamically linked, autonomously planned, and collaboratively combined through networked information systems to form an on-demand, highly elastic, and agile kill web. In March 2022, the US Department of Defense released the “Joint All-Domain Command and Control (JADC2) Strategy Implementation Plan,” which aims to extend multi-domain operations into an all-domain operations concept, connect the sensors of the various services into a unified “Internet of Things,” and use artificial intelligence algorithms to help improve combat command decisions. ③
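
    The compositional idea behind such concepts can be caricatured in a few lines: notional sensor and effector “tiles” are paired greedily against a task list. Every tile name, coverage set, and rule below is invented; real kill-web composition is a hard resource-allocation problem, and this is only a toy illustration of dynamic pairing.

```python
# Toy illustration of "mosaic" composition: greedily pair notional sensor
# and effector "tiles" to tasks they can cover. All names are invented.
def pair_tiles(sensors: dict[str, set[str]],
               effectors: dict[str, set[str]],
               tasks: list[str]) -> dict[str, tuple[str, str]]:
    """Assign each task one free sensor and one free effector that cover it."""
    assignment: dict[str, tuple[str, str]] = {}
    free_sensors, free_effectors = dict(sensors), dict(effectors)
    for task in tasks:
        s = next((n for n, cov in free_sensors.items() if task in cov), None)
        e = next((n for n, cov in free_effectors.items() if task in cov), None)
        if s and e:
            assignment[task] = (s, e)
            free_sensors.pop(s)    # in this toy model a tile serves one task
            free_effectors.pop(e)
    return assignment

if __name__ == "__main__":
    sensors = {"radar-1": {"air"}, "uav-cam": {"ground", "air"}}
    effectors = {"battery-A": {"air"}, "team-B": {"ground"}}
    print(pair_tiles(sensors, effectors, ["air", "ground"]))
    # {'air': ('radar-1', 'battery-A'), 'ground': ('uav-cam', 'team-B')}
```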

    Wars and conflicts stimulate the weaponization of artificial intelligence.

    In recent years, local conflicts such as the Libyan conflict, the Nagorno-Karabakh conflict, the Ukrainian crisis, and the Israel-Hamas conflict have continued, further stimulating the development of the weaponization of artificial intelligence.

    In the Libyan conflict, the warring parties used various types of drones for reconnaissance and combat missions. According to a report released by the UN Panel of Experts on Libya, a Turkish-made Kargu-2 drone carried out a “hunt and engage remotely” operation in Libya in 2020 and could autonomously attack retreating enemy soldiers; the incident marked the first use of a lethal autonomous weapon system in actual combat. As the American scholar Zachary Kallenborn put it, if someone died in such an autonomous attack, it would most likely be the first known case in history of an AI autonomous weapon being used to kill. In the 2020 Nagorno-Karabakh conflict, Azerbaijan used formations of Turkish-made Bayraktar TB2 drones and Israeli-made “Harop” drones to break through the Armenian air-defense system and seize air superiority and the initiative on the battlefield. ④ The remarkable results of Azerbaijan’s drone operations stemmed in large part from the Armenian army’s “underestimation of the enemy” and its insufficient understanding of the importance and threat of drones in modern warfare. From the perspective of offensive strategy, moreover, the Azerbaijani army innovated boldly in drone tactics, flexibly employing advanced equipment such as reconnaissance-strike drones and cruise missiles, which not only improved combat efficiency but also greatly enhanced the suddenness and lethality of its attacks. ⑤

    During the Ukrainian crisis that broke out in 2022, both Russia and Ukraine made wide use of military-grade and commercial drones for reconnaissance, surveillance, artillery spotting, and strike missions. The Ukrainian army used Bayraktar TB2 drones and the US-supplied “Switchblade” series of suicide drones to deliver precision strikes and efficient kills, drawing worldwide attention as a “battlefield killer.” In the Israel-Hamas conflict, the Israeli military was accused of using an artificial intelligence system called “Lavender” to identify and lock onto bombing targets in Gaza; it reportedly marked as many as 37,000 Palestinians in Gaza as suspected “militants” and designated them as targets that could be directly “assassinated.” The Israeli military’s actions have drawn widespread attention and condemnation from the international community. ⑥

    Security risks posed by weaponization of artificial intelligence

    From automated command systems to intelligent unmanned combat platforms to intelligent decision-making systems in cyber defense, the application of artificial intelligence in the military field is becoming ever more common and has become an indispensable part of modern warfare. However, as AI is weaponized, its misuse, abuse, and even malicious use will bring risks and challenges to international security that cannot be ignored.

    Intensify the arms race and disrupt the strategic balance.

    In the informatized, intelligent era, the disruptive potential of artificial intelligence is hard for major military powers to resist. All of them are focusing on developing and applying AI military capabilities, fearing that falling behind in this field will cost them strategic opportunities. Deepening the military application of AI promises “asymmetric advantages” at lower cost and higher efficiency.

    First, countries are scrambling to seize the “first-mover advantage.” When a country achieves technological leadership in intelligent weapon systems, it possesses more advanced AI and related application capabilities, giving it a first-mover advantage in weapon system development, control, and emergency response. This advantage includes higher autonomy, intelligence, and adaptability, which increases the country’s military strength and strategic edge. At the same time, the first mover’s military advantage may become a security threat to competitors, triggering a scramble among countries over the military application of advanced technologies. ⑦ In August 2023, US Deputy Secretary of Defense Kathleen Hicks announced the “Replicator” initiative, which seeks to field thousands of “autonomous weapon systems” in the Indo-Pacific region in less than two years. ⑧

    Second, the opacity of countries’ AI armament programs may intensify the arms race, for two main reasons. First, AI is an “enabling technology” that can be built into many different applications, which makes the specifics of AI military applications hard to verify. Unlike nuclear weapons, where monitoring uranium, centrifuges, warheads, and delivery systems can reveal whether a country is developing or deploying them, the difference between semi-autonomous and fully autonomous weapon systems lies mainly in computer software and algorithms, and treaty compliance is difficult to check by physical inspection. Second, to maintain their strategic advantages, countries often keep the details of advanced military technologies secret so that opponents cannot discern their strategic intentions. In the current international environment, this opacity not only intensifies the arms race but also lays the groundwork for future escalation of conflicts.
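
    A toy configuration sketch makes the verification point concrete: below, the only difference between a hypothetical “semi-autonomous” and “fully autonomous” platform is a single software flag, which no physical inspection of the airframe would reveal. All fields and names are invented for illustration.

```python
# Toy illustration of the verification problem: two notionally identical
# platforms differ only in a software flag, invisible to hardware inspection.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlatformConfig:
    airframe: str                      # physically observable
    sensor_suite: str                  # physically observable
    human_confirmation_required: bool  # software-only; inspection cannot see it

semi_autonomous = PlatformConfig("airframe-X", "EO/IR suite", True)
fully_autonomous = PlatformConfig("airframe-X", "EO/IR suite", False)

def physically_observable(cfg: PlatformConfig) -> tuple[str, str]:
    """What a treaty inspection of the hardware alone would record."""
    return (cfg.airframe, cfg.sensor_suite)

if __name__ == "__main__":
    # Identical from the inspector's point of view:
    print(physically_observable(semi_autonomous)
          == physically_observable(fully_autonomous))  # True
```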

    Third, uncertainty about countries’ strategic intentions will also intensify the arms race. The impact of artificial intelligence on strategic stability, nuclear deterrence, and war escalation depends largely on other countries’ perception of its capabilities rather than on its actual capabilities. As the American scholar Thomas Schelling pointed out, international relations often take on the character of risk competition, which is more a test of nerve than of force; the relationship between major opponents is determined by which side is ultimately willing to commit more power, or to make it appear that it is about to commit more power. ⑨ An actor’s perception of others’ capabilities, whether accurate or not, greatly affects the course of an arms race. If one country vigorously develops intelligent weapon systems, its competitors, unable to be sure of its intentions, will grow suspicious of its armament capabilities and aims and will usually respond in kind, meeting their own security needs by building up their own armaments. It is precisely this ambiguity of intention that stimulates technological accumulation, exacerbates the instability of weapons deployment, and ultimately produces a vicious cycle.

    Empowering operational processes increases the risk of conflict.

    Empowered by big data and artificial intelligence technologies, traditional combat processes will be rebuilt intelligently, that is, from “situational awareness – command decision-making – attack and defense coordination – comprehensive support” to “intelligent cognition of global situation – human-machine integrated hybrid decision-making – manned/unmanned autonomous coordination – proactive on-demand precise support”. However, although the intelligent reconstruction of combat processes has improved the efficiency and accuracy of operations, it has also increased the risk of conflict and misjudgment.

    First, wars that break out at “machine speed” will increase the risk of hasty action. AI weapon systems have demonstrated formidable accuracy and response speed, meaning future wars may erupt at “machine speed.” ⑩ But an excessively fast tempo of war also raises the risk of conflict. In domains that prize autonomy and response speed, such as missile defense, autonomous weapon systems, and cyberspace, faster responses bring great strategic advantages, but they also sharply compress the time window in which a defender can react to military action, placing combat commanders and decision-makers under enormous “time pressure,” exacerbating the risk of “hasty action,” and increasing the chance that a crisis escalates by accident.

    Second, reliance on system autonomy may increase the chance of misjudgment under pressure. The U.S. Department of Defense believes that “highly autonomous artificial intelligence systems can autonomously select and execute corresponding operations based on the dynamic changes in mission parameters, and efficiently achieve human preset goals. The increase in autonomy not only greatly reduces dependence on manpower and improves overall operational efficiency, but is also regarded by defense planners as a key factor in maintaining tactical leadership and ensuring battlefield advantage.” ⑪ However, since human commanders cannot respond quickly enough, they may gradually delegate control to autonomous systems, increasing the chance of misjudgment. In March 2003, the U.S. Patriot missile system mistakenly marked a friendly Tornado fighter as an anti-radiation missile. The commander chose to launch the missile under the pressure of only a few seconds to react, resulting in the death of two pilots. ⑫
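
    The time-pressure dynamic in incidents like this can be caricatured in a few lines of code. In the toy model below, if the window left for human review is shorter than an assumed human reaction time, the machine’s classification simply stands. Every number, label, and threshold is invented; this is not a model of the Patriot system.

```python
# Toy model of a time-pressured engagement decision: when the review
# window is shorter than human reaction time, the machine label prevails.
from dataclasses import dataclass

HUMAN_REACTION_S = 8.0  # assumed seconds a human needs to review a track

@dataclass
class Track:
    machine_label: str       # classifier output (may be wrong)
    true_label: str          # ground truth, unknown to the operator
    seconds_to_impact: float

def decide(track: Track) -> str:
    """Return the label the engagement decision is effectively based on."""
    if track.seconds_to_impact >= HUMAN_REACTION_S:
        return track.true_label   # human has time to catch the error
    return track.machine_label    # window too short; machine label stands

if __name__ == "__main__":
    rushed = Track("anti-radiation missile", "friendly aircraft", 4.0)
    unhurried = Track("anti-radiation missile", "friendly aircraft", 30.0)
    print(decide(rushed))     # the machine's (wrong) label drives the decision
    print(decide(unhurried))  # human review corrects it
```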

    Third, it weakens the effectiveness of crisis-termination mechanisms. During the Cold War, the United States and the Soviet Union built a series of restraining measures to curb the escalation of crises and prevent them from turning into large-scale nuclear war. In these measures, humans played the vital role of “supervisors”: when risks threatened to spin out of control, they could initiate termination measures with enough time to avert a large-scale humanitarian disaster. However, as the computing power of AI systems grows and becomes deeply integrated with machine learning, combat responses are becoming faster, more precise, and more destructive, and the human ability to intervene and terminate a crisis may be weakened.

    War accountability is difficult and collateral casualties increase.

    Artificial intelligence weapon systems make responsibility for war harder to define. In traditional combat modes, weapon systems are controlled by humans; when errors or crises occur, the human operators or the developers of the operating systems bear the corresponding responsibility. AI technology itself weakens human agency and control, leaving the attribution of responsibility for technical behavior unclear.

    The first is the “black box” problem of artificial intelligence. Although AI has significant advantages in processing and analyzing data, its internal rules of operation and causal logic are often difficult for humans to understand and explain, which makes it hard for programmers to correct errors in an algorithm. This is commonly called the “black box” of the algorithmic model. Once an AI weapon system causes a safety hazard, the “algorithmic black box” may become a convenient excuse for the responsible parties to evade accountability: those seeking to assign responsibility face only generalized buck-passing, with the finger of blame pointed at the AI weapon system itself. In practice, if the decision-making process of AI cannot be understood and explained, it may trigger a series of problems, such as decision errors, crises of trust, and misuse of information.
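
    One common, if partial, response to the black-box problem is post-hoc attribution. The sketch below shows permutation importance, a standard model-agnostic probe: shuffle one input feature and measure how much a model’s accuracy drops. It runs on synthetic data with a hand-rolled “model” and only illustrates the idea; it does not solve the explainability problem described above.

```python
# Toy permutation-importance probe: shuffle one feature column and measure
# the accuracy drop. A large drop means the model leans on that feature.
import random

random.seed(0)

# Synthetic data: the label depends on feature 0 only; feature 1 is noise.
points = [[random.random(), random.random()] for _ in range(200)]
data = [(x, 1 if x[0] > 0.5 else 0) for x in points]

def model(x: list[float]) -> int:
    """A fixed 'trained' model that thresholds feature 0."""
    return 1 if x[0] > 0.5 else 0

def accuracy(dataset) -> float:
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def permutation_importance(dataset, feature: int) -> float:
    """Accuracy drop after shuffling one feature column."""
    column = [x[feature] for x, _ in dataset]
    random.shuffle(column)
    shuffled = [(x[:feature] + [v] + x[feature + 1:], y)
                for (x, y), v in zip(dataset, column)]
    return accuracy(dataset) - accuracy(shuffled)

if __name__ == "__main__":
    for f in (0, 1):
        print(f"feature {f}: importance {permutation_importance(data, f):.2f}")
    # feature 0 shows a large drop; feature 1 shows roughly zero.
```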

    The second is the division of responsibilities between humans and machines in military operations. When an AI system fails or makes a wrong decision, should it be considered an independent entity to bear responsibility? Or should it be considered a tool, with human operators bearing all or part of the responsibility? The complexity of this division of responsibilities lies not only in the technical level, but also in the ethical and legal levels. On the one hand, although AI systems can make autonomous decisions, their decision-making process is still limited by human preset procedures and algorithms, so their responsibilities cannot be completely independent of humans. On the other hand, AI systems may go beyond the preset scope of humans and make independent decisions in some cases. How to define their responsibilities at this time has also become a difficult problem in the field of arms control.

    The third is the allocation of decision-making power between humans and AI weapon systems. Depending on the degree of machine autonomy, an AI system can perform tasks in three decision-and-control modes: semi-autonomous, supervised autonomous, and fully autonomous. In a semi-autonomous system, decision-making power over the action rests with humans; in supervised autonomous operation, humans supervise and intervene when necessary; in fully autonomous operation, humans do not take part in the action process. As the military application of AI deepens, the human role in the combat system is gradually shifting from the traditional “man in the loop” mode to “man on the loop,” with humans evolving from direct operators inside the system into supervisors outside it. This transformation raises a new problem: how to ensure that AI weapon systems still follow human ethics and values when operating independently is a major challenge now facing AI weapons research and development.
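
    The three modes can be written compactly as a toy state machine showing who authorizes an action in each case. The enum names and the authorization rule below are invented for illustration and abstract away everything that makes real supervisory control hard.

```python
# Toy state machine for the three decision-and-control modes: where the
# human sits determines who authorizes a proposed action. Names invented.
from enum import Enum, auto

class Mode(Enum):
    SEMI_AUTONOMOUS = auto()        # human in the loop: human decides
    SUPERVISED_AUTONOMOUS = auto()  # human on the loop: machine acts, human may veto
    FULLY_AUTONOMOUS = auto()       # human out of the loop: machine decides

def authorize(mode: Mode, machine_decision: bool,
              human_approves: bool, human_vetoes: bool) -> bool:
    """Return whether a proposed action proceeds under the given mode."""
    if mode is Mode.SEMI_AUTONOMOUS:
        return human_approves                         # no action without a human "yes"
    if mode is Mode.SUPERVISED_AUTONOMOUS:
        return machine_decision and not human_vetoes  # default-on, human can stop it
    return machine_decision                           # no human gate at all

if __name__ == "__main__":
    for mode in Mode:
        print(mode.name, authorize(mode, machine_decision=True,
                                   human_approves=False, human_vetoes=False))
    # SEMI_AUTONOMOUS False / SUPERVISED_AUTONOMOUS True / FULLY_AUTONOMOUS True
```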

    Lowering the threshold for proliferation leads to misuse and abuse.

    Traditional strategic competition usually involves large-scale development and procurement of weapon systems, which demands enormous funding and technical support. Once AI technology matures and diffuses, it becomes easy to obtain and inexpensive, and even small and medium-sized countries may acquire the ability to develop advanced intelligent weapon systems. At present, strategic competition in military AI is concentrated mainly among major military powers such as the United States and Russia. In the long run, however, the diffusion of AI technology will widen the scope of strategic competition and pose a destructive threat to the existing strategic balance. Once smaller countries that master AI technology become relatively competitive, their willingness to initiate confrontation when threatened by major powers may grow.

    First, artificial intelligence helps develop some lightweight and agile means of warfare, thereby encouraging some small and medium-sized countries or non-state actors to use it to carry out small, opportunistic military adventures, achieving their strategic goals at a lower cost and with more abundant channels. Second, the rapid development of artificial intelligence has made new forms of warfare such as cyber warfare and electronic warfare increasingly prominent. In a highly competitive battlefield environment, malicious third-party actors can influence military planning and strategic deterrence by manipulating information, leading to an escalation of the situation. In the Ukrainian crisis that broke out in 2022, a lot of false information was spread on the Internet to confuse the public. Third, the widespread application of artificial intelligence technology has also reduced strategic transparency. Traditional military strategies often rely on a large amount of intelligence collection, analysis and prediction, and with the assistance of artificial intelligence technology, combat planning and decision-making processes have become more complex and unpredictable. This opacity may lead to misunderstandings and misjudgments, thereby increasing the risk of escalating conflicts.

    Governance Path for Security Risks of Weaponized Artificial Intelligence

    To ensure the safe development of artificial intelligence and avoid the potential harm caused by its weaponization, we should strengthen international communication on governance strategies, seek consensus and cooperation among countries on the military application of artificial intelligence; promote dialogue and coordination on laws and regulations to form a unified and standardized legal framework; strengthen the constraints on artificial intelligence ethics to ensure that technological development complies with ethical standards; and actively participate in global security governance cooperation to jointly maintain peace and stability in the international community.

    Attach great importance to strategic communication at the international level.

    AI governance is a global issue that requires the concerted efforts of all countries. On the international stage, countries’ interests both converge and conflict; handling global issues through effective channels of communication has therefore become key to safeguarding world peace and development.

    On the one hand, we need to accurately grasp the challenges of international governance of AI. We need to grasp the consensus of various countries on the development of weaponized AI, pay close attention to the policy differences among countries in the security governance of weaponized AI applications, and coordinate relevant initiatives with the UN agenda through consultation and cooperation, so as to effectively prevent the military abuse of AI and promote the use of AI for peaceful purposes.

    On the other hand, governments should be encouraged to reach relevant agreements and establish strategic mutual trust through official or semi-official dialogues. Compared with the “Track 1 Dialogue” at the government level, the “Track 1.5 Dialogue” refers to dialogues between government officials and civilians, while the “Track 2 Dialogue” is a non-official dialogue between scholars, retired officials, etc. These two forms of dialogue have higher flexibility and are important supplements and auxiliary means to official dialogues between governments. Through a variety of dialogue and communication methods, officials and civilians can widely discuss possible paths to arms control, share experiences and expertise, and avoid the escalation of the arms race and the deterioration of tensions. These dialogue mechanisms will provide countries with a continuous communication and cooperation platform, help enhance mutual understanding, strengthen strategic mutual trust, and jointly respond to the challenges brought about by the militarization of artificial intelligence.

    Scientifically formulate laws and ethical norms for artificial intelligence.

    Artificial intelligence technology is itself neither right nor wrong, neither good nor evil, but the intentions behind its design, development, manufacture, use, operation, and maintenance can be good or bad. The weaponization of AI has aroused widespread ethical concern. Under the framework of international law, can autonomous weapon systems accurately distinguish combatants from civilians on a complex battlefield? If an AI weapon system causes unintended harm, how should responsibility be assigned? Is it morally and ethically acceptable to give machines decision-making power over life and death? These concerns underscore the need to strengthen ethical constraints on AI.

    On the one hand, we must put ethics first and build the concept of “AI for good” in from the source of the technology. In designing AI military systems, values such as people-centeredness and “AI for good” should be embedded in the system itself. The purpose is to eliminate at the source the indiscriminate killing and injury AI might cause, control excessive lethality, and prevent accidental damage, so as to confine the harm caused by AI weapon systems to the smallest possible range. At present, nearly 100 institutions and government departments at home and abroad have issued AI ethical principles of various kinds, and academia and industry have reached consensus on basic AI ethics. In 2022, the “Position Paper on Strengthening the Ethical Governance of Artificial Intelligence” that China submitted to the United Nations provided an important reference for the development of global AI ethics regulation; it explicitly emphasizes advancing AI ethics oversight through institution building, risk control, collaborative governance, and other measures.

    On the other hand, we need to improve relevant laws and regulations and clarify the boundaries of rights and responsibilities of AI entities. We need to formulate strict technical review standards to ensure the security and reliability of AI systems. We need to conduct comprehensive tests before AI systems go online to ensure that they do not have a negative impact on human life and social order. We need to clarify the legal responsibilities of developers, users, maintainers and other parties throughout the life cycle of AI systems, and establish corresponding accountability mechanisms.

    Pragmatically participate in international cooperation on artificial intelligence security governance.

    The strategic risks brought about by the military application of artificial intelligence further highlight the importance of pragmatic cooperation in international security. It is recommended to focus on three aspects:

    First, promote the formulation of guidelines for the use of artificial intelligence in the military field. Formulating a code of conduct for the military application of artificial intelligence is an important responsibility of all countries to regulate the military application of artificial intelligence, and it is also a necessary measure to promote international consensus and comply with international laws and regulations. In 2021, the Chinese government submitted the “China’s Position Paper on Regulating the Military Application of Artificial Intelligence” to the United Nations Convention on Certain Conventional Weapons Conference, and issued the “Global Artificial Intelligence Governance Initiative” in 2023. These have provided constructive references for improving the code of conduct for regulating the military application of artificial intelligence.

    The second is to establish an applicable regulatory framework. The dual-use nature of AI involves many stakeholders. Some non-state actors, such as non-governmental organizations, technology communities, and technology companies, will play a more prominent role in the global governance of AI and become an important force in the construction of a regulatory framework for the military application of AI. The technical regulatory measures that countries can take include: clarifying the scope of use of AI technology, responsible entities, and penalties for violations; strengthening technology research and development to improve the security and controllability of technology; establishing a regulatory mechanism to supervise the development and application of technology throughout the process, and promptly discover and solve problems.

    Third, jointly develop AI security safeguard technologies and solutions. Bilateral or multilateral negotiations between governments and between militaries should be encouraged as dialogue options for military AI applications, with extensive exchanges on military AI safety technologies, operating procedures, and practical experience, and with the sharing and mutual reference of relevant risk-management technical standards and usage norms, so as to keep injecting new stabilizing factors into international security mutual-trust mechanisms against the background of AI militarization.

    (The author is the director, researcher, and doctoral supervisor of the National Defense Science and Technology Strategic Research Think Tank of the National University of Defense Technology; Liu Hujun, a master’s student at the School of Foreign Languages of the National University of Defense Technology, also contributed to this article.)

Original Chinese source: http://paper.people.com.cn/rmlt/pc/content/202502/05/content_30059349.html
