International Cybersecurity Consortium Led by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK NCSC Adopts Guidelines for Secure AI System Development, Setting Baseline Standards for the Industry’s Self-Regulation
Twenty-three domestic and international cybersecurity organizations representing every continent, with contributions from several Big Tech companies and research institutions, recently published the Guidelines for Secure AI System Development. The publication marks a significant step toward addressing the intersection of artificial intelligence, cybersecurity, and critical infrastructure.
The Guidelines, complementing the U.S. Voluntary Commitments on Ensuring Safe, Secure, and Trustworthy AI, provide essential recommendations for AI system development and emphasize the importance of adhering to Secure by Design principles. The Guidelines offer security considerations and risk mitigations across four stages of the AI development lifecycle:
- Secure design
- Secure development
- Secure deployment
- Secure operation and maintenance
The principles prioritize:
- taking ownership of security outcomes for customers
- embracing radical transparency and accountability
- building organizational structure and leadership so that Secure by Design is a top business priority.
The Guidelines apply to all types of AI systems, not just frontier models. They offer suggestions and mitigations to help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their AI systems that use machine learning. The document is aimed primarily at providers of AI systems, whether those systems are based on models hosted by the organization or make use of external application programming interfaces (APIs).
The Guidelines emphasize that developers of AI systems should bear the primary responsibility for ensuring the security of these systems, as opposed to shifting the responsibility onto system users.
While not legally binding, the Guidelines encourage AI system developers to examine and compare their AI development practices with the recommendations provided by CISA and the NCSC. Various state and federal laws require companies to implement “reasonable” or “appropriate” security measures to safeguard their IT systems and data. What security practices count as reasonable or appropriate for AI systems remains a developing and largely unresolved question. Regulators aiming to enforce these laws might look to the Guidelines as a basis for establishing the legally required security practices deemed reasonable or appropriate for AI.
NAVIGATE THE BURGEONING DOMAIN OF ARTIFICIAL INTELLIGENCE LAW WITH
The CommLaw Group!
In our Artificial Intelligence (AI) practice, we combine our established subject-matter expertise in data privacy, intellectual property law, and regulatory compliance with our proven ability to successfully navigate the ever-developing and uncertain technology law landscape. Our attorney ranks include published experts on legal matters related to AI whose work has gained international traction. We closely follow regulatory and case law developments to guide businesses, developers, and investors on AI-related legal compliance and risk mitigation.
CONTACT US NOW. WE ARE STANDING BY TO GUIDE YOUR COMPANY’S COMPLIANCE EFFORTS
Jonathan S. Marashlian – Tel: 703-714-1313 / E-mail: jsm@CommLawGroup.com
Michael Donahue – Tel: 703-714-1319 / E-mail: mpd@CommLawGroup.com
Linda McReynolds – Tel: 703-714-1318 / E-mail: lgm@CommLawGroup.com
Diana Bikbaeva – Tel: 703-663-6757 / E-mail: dab@CommLawGroup.com