China's Position Paper: Regulating the Military Applications of Artificial Intelligence
The rapid development and widespread application of artificial intelligence (AI) technology are profoundly changing the way people work and live, bringing enormous opportunities to the world while also posing unpredictable security challenges. Of particular concern, the military application of AI technology may have far-reaching impacts and pose potential risks in the areas of strategic security, governance rules, and ethics.
AI security governance is a common issue facing mankind. As AI technology is applied ever more widely across fields, all parties share concerns about the risks posed by its military application and even weaponization.
Against the backdrop of the diverse challenges facing world peace and development, all countries should uphold a vision of common, comprehensive, cooperative and sustainable global security and, through dialogue and cooperation, seek consensus on how to regulate the military applications of AI, building an effective governance mechanism that prevents such applications from causing grave harm or even disaster to humanity.
Strengthening the regulation of the military applications of AI, and preventing and controlling the risks they may create, will help enhance mutual trust among countries, maintain global strategic stability, prevent arms races, and alleviate humanitarian concerns. It will also help build inclusive and constructive security partnerships and put into practice, in the field of AI, the vision of building a community with a shared future for mankind.
We welcome all parties, including governments, international organizations, technology companies, research institutes and universities, non-governmental organizations, and individual citizens, to work together to advance AI security governance in the spirit of extensive consultation, joint contribution, and shared benefits.
To this end, we call for:
– In terms of strategic security, all countries, and major powers in particular, should adopt a prudent and responsible attitude in developing and using AI technology in the military field, refrain from seeking absolute military advantage, and prevent AI from exacerbating strategic miscalculation, undermining strategic mutual trust, triggering escalation of conflicts, or damaging global strategic balance and stability.
– In terms of military policy, while developing advanced weapons and equipment and improving legitimate national defense capabilities, countries should bear in mind that the military application of AI must not become a tool for waging war or pursuing hegemony, and should oppose the use of AI's technological advantages to endanger the sovereignty and territorial security of other countries.
– In terms of law and ethics, countries should develop, deploy, and use relevant weapon systems in accordance with the common values of mankind, put people first, uphold the principle of "AI for good," and abide by national or regional ethical and moral norms. Countries should ensure that new weapons and their means of warfare comply with international humanitarian law and other applicable international law, strive to reduce collateral casualties and losses of life and property, and prevent the misuse and malicious use of relevant weapon systems, as well as the indiscriminate harm that could result.
– In terms of technical security, countries should continuously improve the security, reliability, and controllability of AI technology, strengthen their capacity for security assessment and control of AI, ensure that relevant weapon systems remain under human control at all times, and guarantee that humans can terminate their operation at any moment. The security of AI data must be protected, and the militarized use of such data should be restricted.
– In terms of research and development, countries should strengthen self-restraint in AI R&D activities and, with full consideration of the combat environment and the characteristics of the weapons concerned, implement the necessary human-machine interaction throughout the weapon life cycle. Countries should always uphold the principle that humans are the ultimate party responsible, establish accountability mechanisms for AI, and provide the necessary training for operators.
– In terms of risk management, countries should strengthen oversight of the military applications of AI, in particular by implementing tiered and category-based management, so as to avoid using immature technologies that may cause serious negative consequences. Countries should also strengthen assessment of AI's potential risks, including by taking the necessary measures to reduce the risk of proliferation of military AI applications.
– In terms of rule-making, countries should adhere to multilateralism, openness, and inclusiveness. To track technological trends and guard against potential security risks, countries should conduct policy dialogue and strengthen exchanges with international organizations, technology companies, technical communities, non-governmental organizations, and other entities, enhancing understanding and cooperation. They should work together to regulate the military applications of AI, establish an international mechanism with universal participation, and promote the formation of an AI governance framework and standards enjoying broad consensus.
– In terms of international cooperation, developed countries should help developing countries improve their governance capacity. Given the dual-use nature of AI technology, countries should, while strengthening oversight and governance, avoid drawing ideological lines or overstretching the concept of national security, remove artificial technological barriers, and ensure that all countries fully enjoy the right to technological development and peaceful use.