Colorado’s Governor has signed into law the Colorado Artificial Intelligence Act (CAIA), which addresses consequential algorithmic decisions made by AI or with significant input from AI.
The CAIA prohibits AI-powered algorithmic decision-making resulting in discrimination against Colorado residents on the basis of actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal laws.
Discrimination is prohibited when making algorithmic decisions regarding:
- employment or an employment opportunity;
- education enrollment or an education opportunity;
- a financial or lending service;
- an essential government service;
- healthcare services;
- housing;
- insurance; or
- a legal service.
The CAIA calls AI systems “high-risk AI systems” when they are used in decision-making in the above contexts.
Obligations of Developers and Deployers
Under the CAIA, developers of high-risk AI systems will have a duty of reasonable care to protect consumers from algorithmic discrimination when those systems are used for their intended purposes. The CAIA also imposes certain testing, reporting, risk management, and documentation requirements on high-risk AI developers.
Deployers (users) of high-risk AI systems will also be required to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
Both developers and deployers will be presumed to have exercised reasonable care unless the Colorado Attorney General (AG) proves otherwise.
With some exceptions (for example, if a deployer employs fewer than 50 full-time employees and does not use its own data to train the high-risk AI system), deployers will also be required to:
- implement a reasonable risk management policy, considering the nature of the AI systems deployed, the sensitivity of data processed, the size and complexity of the deployer, and the guidance and standards set forth in the “Artificial Intelligence Risk Management Framework” published by the National Institute of Standards and Technology in the United States Department of Commerce, Standard ISO/IEC 42001 of The International Organization for Standardization, or a similar standard; and
- complete an impact assessment at least annually and within 90 days after making intentional and substantial modifications to the high-risk AI system.
Consumer Rights
Under the CAIA, high-risk AI deployers will be required to provide to consumers:
- a notice that a high-risk AI system is being used to make consequential decisions about them;
- a plain-language disclosure and explanation of the AI system in question;
- an opportunity to opt out of such consequential AI decision-making;
- a disclosure of the reasons for an adverse decision, the degree to which the high-risk AI system contributed to that decision, the type of data the system processed, and the source of that data;
- an opportunity to correct any incorrect personal data that the high-risk AI system processed in making the decision; and
- an opportunity to appeal an adverse consequential decision made by the high-risk AI system.
Effective Date; Enforcement
The CAIA will become effective on February 1, 2026, and will be enforced by the Colorado AG, who is also authorized to promulgate rules to implement the Act. Violations of the Act will be penalized as unfair or deceptive trade practices under Colorado law.
Conclusion
This action by the Colorado legislature reflects a broader movement among states to regulate AI-driven algorithmic decision-making. Existing federal anti-discrimination laws may also apply to algorithmic decision-making. Businesses nationwide should therefore remain vigilant in their AI implementation practices and seek legal counsel to navigate the applicable requirements and reduce their legal exposure.
NAVIGATE THE BURGEONING DOMAIN OF
ARTIFICIAL INTELLIGENCE-ASSISTED TELEMARKETING WITH
The CommLaw Group!
In our Artificial Intelligence (AI) practice, we combine our established subject matter expertise in data privacy, intellectual property law, and regulatory compliance (telemarketing and more) with our proven ability to navigate ever-developing and uncertain technology law landscapes. Our attorney ranks include published experts in AI-related legal matters whose work has gained international traction. We closely follow regulatory and case law developments to guide businesses, developers, and investors on AI-related legal compliance and legal risk mitigation.
If you have any questions about the CAIA’s implications for your business,
PLEASE CONTACT US NOW. WE ARE STANDING BY TO GUIDE YOUR COMPANY’S COMPLIANCE EFFORTS
Jonathan S. Marashlian – Tel: 703-714-1313 / E-mail: jsm@CommLawGroup.com
Michael Donahue – Tel: 703-714-1319 / E-mail: mpd@CommLawGroup.com
Robert H. Jackson – Tel: 703-714-1316 / E-mail: rhj@CommLawGroup.com
Linda McReynolds – Tel: 703-714-1318 / E-mail: lgm@CommLawGroup.com
Diana James – Tel: 703-663-6757 / E-mail: daj@CommLawGroup.com