The United States, United Kingdom, Australia, and 15 other nations have jointly unveiled a comprehensive set of global guidelines aimed at safeguarding artificial intelligence (AI) models against tampering. The newly released guidelines advocate a "secure by design" approach to AI development and usage, emphasizing cybersecurity practices throughout the AI lifecycle.
The 20-page document, released by the 18 countries, offers a framework delineating essential cybersecurity measures for AI firms during the design, development, deployment, and ongoing monitoring of AI models. The coalition noted that amid the industry's rapid evolution, cybersecurity often takes a backseat, and urged a concerted effort to prioritize security in AI innovation, according to a report by Cointelegraph.
The guidelines outline several key recommendations, including maintaining stringent control over the infrastructure supporting AI models. The document also emphasizes continuous monitoring to detect unauthorized modifications to models, both before and after release, and stresses the importance of cybersecurity training for staff to mitigate associated risks effectively.
However, the guidelines notably omit contentious aspects of the AI landscape, such as the regulation of image-generating models and deepfakes, as well as data collection methods and their implications for model training. These issues, including copyright infringement concerns, have led to legal actions against multiple AI firms.
U.S. Secretary of Homeland Security Alejandro Mayorkas underscored the significance of cybersecurity in shaping safe and trustworthy AI systems, stating, "We are at a critical juncture in AI development, a technology that holds immense potential. Cybersecurity remains pivotal in ensuring the safety and reliability of AI systems."
This initiative aligns with other government efforts concerning AI oversight. Earlier this month, governments and AI firms convened at an AI Safety Summit in London, aiming to establish consensus on AI development practices. Concurrently, the European Union is deliberating its AI Act framework, and President Joe Biden issued an executive order in October aimed at setting AI safety and security standards — though these efforts have encountered resistance from the AI industry over fears of stifling innovation.
Among the co-signatories to these "secure by design" guidelines are Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore. Major AI firms such as OpenAI, Microsoft, Google, Anthropic, and Scale AI also contributed significantly to the formulation of these guidelines, underscoring the collaborative approach taken towards bolstering AI security on a global scale.