Colorado Enacts Landmark AI Legislation, Setting a Marker for Other States to Follow

On May 17, 2024, the Colorado Governor signed SB 205, the most comprehensive state AI law enacted to date in the United States. Coupled with President Biden’s AI Executive Order of October 2023 and the potential for bipartisan federal legislation or follow-on state laws, the Colorado law sends a strong signal that any developer or deployer of AI tools should be tracking regulatory developments closely. The new law adopts a classification system similar to that of the EU’s AI Act and will have significant ramifications for certain sectors, including financial services, insurance, health care, education, and government, as well as for employers who use automated employment decision-making tools such as job candidate screening software.

Overview

SB 205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems,” is primarily focused on “high-risk artificial intelligence systems” – defined as AI systems that make, or are a substantial factor in making, a “consequential decision” – and, in particular, protecting consumers from risks of “algorithmic discrimination” arising from the use of such systems. Under the new law, an AI system is broadly defined to include any machine-based system that generates outputs based on what the tool infers from any system inputs. This would include machine learning (ML) and automated decision-making tools (ADTs).

Under SB 205: (i) use of an AI system that results in unlawful differential treatment or impact disfavoring an individual or group on the basis of actual or perceived membership in a protected class constitutes “algorithmic discrimination”; and (ii) a “consequential decision” is a decision that has a material legal or similarly significant effect on the provision or denial of, among other things, education, employment, lending or credit, an essential government service, health care, housing, insurance, or a legal service.

General Duty to Avoid Algorithmic Discrimination

Subject to limited enumerated exemptions (in particular, in connection with legal compliance), SB 205 imposes a “duty to avoid algorithmic discrimination” on both developers and deployers/users doing business in the state of Colorado, without volume or dollar thresholds. That duty encompasses the obligations summarized below; more specific requirements will be set out in implementing regulations to be issued by the Colorado Attorney General.

Compliance Obligations for Developers (i.e., those who develop or substantially modify any AI tool or system): Beginning February 1, 2026, developers of AI tools and systems will be required to:

  • Exercise Reasonable Care: Developers must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination from the use of a high-risk AI tool or system.
  • Provide Documentation for Deployers and other Developers: Subject to limited exceptions, developers must make available to each deployer/user or other developer of the high-risk AI tool or system:
    • a statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI tool or system;
    • documentation disclosing high-level summaries of the type(s) of data used to train the high-risk AI tool or system; known or reasonably foreseeable limitations of the high-risk AI tool or system, including known or reasonably foreseeable risks of algorithmic discrimination; and the purpose, and the intended benefits and uses, of the high-risk AI tool or system;
    • documentation describing how the high-risk AI tool or system was evaluated for performance and mitigation of algorithmic discrimination; data governance measures; intended outputs; measures taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination; and how the high-risk AI tool or system should be used, not be used, and monitored; and
    • to the extent feasible, the documentation and information necessary for the deployer/user or other developer to conduct an impact assessment.
  • Issue a Public Website Statement: Developers must make publicly available (e.g., on their website) a summary statement with respect to high-risk artificial intelligence systems, including how they manage known or reasonably foreseeable risks of algorithmic discrimination.
  • Notify the Attorney General: Developers must notify the Colorado Attorney General within 90 days following either (i) discovery that a high-risk AI tool or system caused algorithmic discrimination or (ii) receipt from a deployer/user of a credible report of algorithmic discrimination caused by such system.

Compliance Obligations for Deployers/Users of High-risk AI Tools and Systems: Beginning February 1, 2026, deployers/users of high-risk artificial intelligence systems (such as employers) will be required to, among other things:

  • Exercise Reasonable Care: Deployers/users must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination from the use of the high-risk artificial intelligence system.
  • Implement a Risk Management Policy and Program: Deployers/users must implement a risk management policy and program to govern their deployment of the high-risk AI tool or system. The policy and program must:
    • Identify and Mitigate Risks of Algorithmic Discrimination: describe the principles, processes, and personnel that the deployer/user uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination;
    • Conduct Regular Reviews and Updates: be regularly and systematically reviewed and updated over the life cycle of the high-risk AI tool or system; and
    • Ensure Reasonableness: be reasonable in light of established generally accepted standards (such as NIST’s AI Risk Management Framework), the size and complexity of the deployer/user, the nature and scope of the high-risk artificial intelligence system, and the sensitivity and volume of the data processed.

Deployers/users must provide a copy of their risk management policy to the Colorado Attorney General within 90 days following a request.

  • Conduct an AI Impact Assessment: Deployers/users must complete an impact assessment for each deployed high-risk AI tool or system, at least annually and within 90 days after any substantial modification. Deployers/users must retain a copy of each impact assessment (and all associated records) for at least three years following deployment and must provide a copy to the Colorado Attorney General within 90 days following a request.
  • Conduct Recurring Reviews: Deployers/users must review each deployed high-risk AI tool or system, both before February 1, 2026, and at least annually thereafter, to ensure that the system is not causing algorithmic discrimination.
  • Ensure Transparency and Consumer Rights: Deployers/users must provide the consumer with notice of the deployment of the high-risk AI tool or system; a statement setting forth, among other things, the purpose of the system, the nature of the consequential decision, and a plain-language description of the system; if applicable, information about the consumer’s right under the Colorado Privacy Act to opt out of the processing of personal data for certain profiling; and additional information and opportunities if the consequential decision is adverse to the consumer.
  • Notify the Attorney General: Deployers/users must notify the Colorado Attorney General within 90 days following discovery that a deployed high-risk AI tool or system has caused algorithmic discrimination.
  • Issue a Public Website Statement: Deployers/users must provide, and periodically update, a clear and readily available summary statement on their website regarding their high-risk AI tools or systems, including how they manage known or reasonably foreseeable risks of algorithmic discrimination.

Transparency of AI Systems that Interact with Consumers

Subject to an exception for “obviousness,” SB 205 also requires deployers/users of AI tools and systems intended to interact with consumers to disclose to each such consumer that they are interacting with an AI tool or system. Notably, this requirement is not limited to high-risk AI tools and systems; it applies to any AI tool or system intended to interact with consumers.

Enforcement

SB 205 does not provide a private right of action; the Colorado Attorney General has exclusive authority to issue implementing regulations and to enforce the law. SB 205 expressly provides a developer or deployer/user with an “affirmative defense” in an enforcement action if it (i) discovers and cures a violation in accordance with one of several specified processes and (ii) is otherwise in compliance with one of several specified risk management frameworks for artificial intelligence systems.

Conclusion

Colorado’s SB 205 represents a significant development in the AI legal and regulatory landscape and follows closely in the wake of a narrower Utah law – requiring certain disclosures to persons interacting with certain AI systems – that took effect at the start of May 2024. As with data breach notification laws and, more recently, comprehensive state privacy laws, Colorado’s AI law may well be followed by similar laws in other states in the coming years – unless and until comprehensive federal legislation is enacted.

If you’d like to discuss SB 205, the evolving AI legal and regulatory landscape generally, or your AI governance program, please reach out to the Key Contacts listed above.