Colorado AI Act (SB 205): Prepare Before June 2026

Colorado's AI law creates compliance obligations for companies using AI in employment, lending, or insurance decisions. Learn how to prepare.

Tenlines Team · 13 min read

The First Comprehensive State AI Law

On May 17, 2024, Colorado Governor Jared Polis signed Senate Bill 24-205 into law, making Colorado the first U.S. state to enact comprehensive legislation regulating the use and development of AI systems. The law takes effect June 30, 2026.

Governor Polis signed the bill "with reservations," acknowledging in his signing statement that SB 205 "creates a complex compliance regime for all developers and deployers of AI doing business in Colorado." He also noted this is "among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale."

If your organization uses AI systems that affect Colorado residents — even if you're headquartered elsewhere — you need to understand these requirements.

What the Colorado AI Act Covers

High-Risk AI Systems

The law focuses on "high-risk artificial intelligence systems" — AI that makes or substantially influences "consequential decisions" in these areas:

  • Employment: Hiring, promotion, termination, compensation decisions
  • Education: Admissions, scholarship eligibility, academic assessments
  • Financial Services: Loan approvals, credit decisions, insurance underwriting
  • Healthcare: Treatment recommendations, coverage determinations
  • Housing: Rental applications, mortgage approvals
  • Legal Services: Case assessments, resource allocation
  • Government Services: Benefits eligibility, licensing decisions

If your AI system "makes or is a substantial factor in making" a decision that has "a material legal or similarly significant effect" in any of these categories, you're in scope.

Who Must Comply

The law creates obligations for two categories of organizations:

Developers: Companies that create AI systems and make them available to others. This includes AI vendors, platform providers, and any organization that builds and distributes AI tools.

Deployers: Organizations that use AI systems in their operations. This is most enterprises — if you're using an AI-powered hiring tool, lending algorithm, or customer service system that makes consequential decisions, you're a deployer.

Important: If you significantly modify an off-the-shelf AI system — for example, by retraining a large language model on your proprietary data — you may be treated as a developer with full compliance obligations.

Core Requirements

For Developers

If you build or sell AI systems, you must:

1. Provide comprehensive documentation to deployers, including:

  • General description of the AI system's intended uses
  • Known harmful or inappropriate uses to avoid
  • High-level summaries of training data types
  • Known limitations and risks of algorithmic discrimination
  • Documentation of bias testing and performance evaluations
  • Data governance measures used in development

2. Make public statements on your website describing:

  • The types of high-risk AI systems you develop
  • How you manage risks of algorithmic discrimination

3. Disclose to the Attorney General any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of discovery.

For Deployers

If you use AI systems to make consequential decisions, you must:

1. Implement a risk management policy and program that:

  • Identifies and documents all high-risk AI systems in use
  • Assesses risks of algorithmic discrimination
  • Establishes governance structures for AI oversight
  • Conducts regular impact assessments

2. Complete impact assessments for each high-risk AI system, evaluating:

  • Purpose and intended use
  • Risks of algorithmic discrimination the system may pose
  • Categories of data processed and outputs produced
  • Transparency measures in place
  • Post-deployment monitoring procedures

3. Provide consumer notices before using AI to make consequential decisions, informing individuals that:

  • AI is being used
  • What type of decision the AI is making or influencing
  • How to request human review of the decision
  • How to appeal or correct inaccurate data

4. Enable human review by providing a process for consumers to:

  • Appeal AI-driven decisions
  • Correct data that was used in the decision
  • Request human oversight of automated decisions

What Is Algorithmic Discrimination?

The law defines algorithmic discrimination as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact" based on protected characteristics:

  • Age
  • Race, color, ethnicity, national origin
  • Religion
  • Sex, sexual orientation, gender identity
  • Disability
  • Veteran status
  • Genetic information

Critical distinction: The law doesn't prohibit all differential outcomes. It targets unlawful differential treatment — discrimination that would violate existing civil rights laws. However, the burden is on deployers to demonstrate they've taken reasonable care to prevent such outcomes.

Enforcement and Penalties

Who Enforces

The Colorado Attorney General has exclusive enforcement authority. Violations are treated as deceptive trade practices under the Colorado Consumer Protection Act.

Potential Penalties

While specific fine amounts aren't enumerated in the statute, violations constitute deceptive trade practices under the Colorado Consumer Protection Act, which can result in:

  • Injunctive relief
  • Civil penalties
  • Consumer restitution
  • Legal fees and costs

The reputational damage and litigation costs may exceed direct penalties for many organizations.

Affirmative Defenses

The law provides important safe harbors. You may have an affirmative defense if you:

1. Discover and cure violations through:

  • User feedback mechanisms
  • Adversarial testing or red teaming (as defined by NIST)
  • Internal review processes

2. Comply with recognized frameworks such as:

  • NIST AI Risk Management Framework
  • ISO/IEC 42001
  • Other internationally recognized AI governance frameworks

This is significant: organizations with mature AI governance programs aligned to NIST or ISO standards have stronger defenses against enforcement actions.

Exemptions

Some AI uses fall outside the law's scope:

Self-testing AI: Systems used solely to test for discrimination or bias in other systems

Diversity tools: AI used to increase diversity or redress historical discrimination

Narrow procedural tasks: AI that performs routine administrative functions without substantive decision-making

Pattern detection: Systems that detect decision-making patterns without influencing actual decisions

Federally regulated systems: AI systems already subject to equivalent federal oversight (FDA-approved medical devices, FAA-certified aviation systems)

Chatbots with disclosure: AI-powered chatbots that disclose they are AI-based (these only need the disclosure, not the full compliance framework)

Preparing for Compliance: A Practical Roadmap

Phase 1: Discovery (Now through Q2 2026)

Inventory your AI systems. Document every AI tool your organization uses, including:

  • Vendor-provided AI embedded in HR, finance, and CRM software
  • Custom-built AI systems and models
  • Third-party APIs and AI services
  • AI features in existing enterprise software

Classify risk levels. For each system, determine:

  • Does it make or influence consequential decisions?
  • Which decision categories does it affect?
  • Who are the affected individuals (employees, customers, applicants)?

Identify your role. For each high-risk system:

  • Are you the developer, deployer, or both?
  • If you've customized or retrained a vendor system, you may have developer obligations
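The inventory, classification, and role-identification steps above can be sketched as a simple record. This is an illustrative data model only — the field names, the example system, and the vendor name are hypothetical, and the `high_risk` logic is a rough reading of the statute's scope test, not legal advice:

```python
from dataclasses import dataclass, field

# Decision categories the Act treats as "consequential" (see the list above).
CONSEQUENTIAL_CATEGORIES = {
    "employment", "education", "financial_services", "healthcare",
    "housing", "legal_services", "government_services",
}

@dataclass
class AISystemRecord:
    name: str                       # e.g. "resume-screening-v2" (hypothetical)
    vendor: str                     # "internal" for custom-built systems
    decision_categories: set = field(default_factory=set)
    substantially_influences_decision: bool = False
    customized_or_retrained: bool = False  # may trigger developer obligations

    @property
    def high_risk(self) -> bool:
        # In scope if it makes or substantially influences a consequential decision.
        return (self.substantially_influences_decision
                and bool(self.decision_categories & CONSEQUENTIAL_CATEGORIES))

    @property
    def role(self) -> str:
        # Significantly modifying a vendor system may make you a developer too.
        if self.vendor == "internal" or self.customized_or_retrained:
            return "developer + deployer"
        return "deployer"

screener = AISystemRecord(
    name="resume-screening-v2",
    vendor="AcmeHR",
    decision_categories={"employment"},
    substantially_influences_decision=True,
)
print(screener.high_risk, screener.role)  # -> True deployer
```

Even a lightweight register like this makes the later phases tractable: each record becomes the anchor for an impact assessment and vendor documentation request.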

Phase 2: Assessment (Q1-Q2 2026)

Conduct impact assessments for each high-risk system:

  • Document purpose and intended use
  • Analyze training data for potential bias
  • Evaluate outputs for discriminatory patterns
  • Map data flows and identify sensitive information

Engage vendors. For third-party AI tools:

  • Request documentation required under the law
  • Confirm vendor compliance with developer obligations
  • Negotiate contractual protections

Test for bias. Implement testing protocols to identify:

  • Disparate impact across protected groups
  • Accuracy variations by demographic
  • Edge cases that produce unfair outcomes
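One common starting point for the disparate-impact check above is the "four-fifths rule" from EEOC guidance: a selection rate for any group below 80% of the highest group's rate is a widely used red flag. The sketch below uses made-up counts and is a screening heuristic, not a legal test:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose impact ratio (rate / best rate) falls below threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical counts: (candidates selected, candidates total) per group.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
flagged = four_fifths_check(outcomes)
print(flagged)  # {'group_b': 0.6} -> impact ratio below 0.8, investigate
```

A flagged ratio doesn't prove unlawful discrimination — it's a trigger for the deeper analysis your impact assessment should document.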

Phase 3: Implementation (Q2 2026)

Deploy risk management program:

  • Assign governance responsibilities
  • Establish review and approval processes
  • Create incident response procedures

Implement consumer notices:

  • Update terms of service and privacy policies
  • Create point-of-decision disclosures
  • Build appeal and review request mechanisms

Document everything:

  • Maintain records of assessments, testing, and decisions
  • Log consumer requests and responses
  • Track system performance over time
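The "document everything" habit can be as simple as an append-only log with one record per consequential AI decision. A minimal sketch, assuming a local JSON-lines file (the field names and file path are illustrative — in practice you'd write to tamper-evident, access-controlled storage):

```python
import json
import datetime

def log_ai_decision(system, decision_type, outcome, human_review_requested=False):
    """Append one audit record per consequential AI decision (illustrative schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "decision_type": decision_type,
        "outcome": outcome,
        "human_review_requested": human_review_requested,
    }
    # JSON-lines append: one record per line, easy to grep and export.
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_ai_decision("resume-screening-v2", "employment", "advanced_to_interview")
```

Capturing the human-review flag at write time means appeal-handling metrics fall out of the same log you already keep for assessments.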

Phase 4: Ongoing Compliance (Post-June 2026)

Monitor and test continuously:

  • Regular bias audits
  • Performance monitoring
  • User feedback analysis

Update assessments:

  • Reassess when systems change
  • Document material modifications
  • Review vendor updates

Special Considerations for AI-Enabled Workplaces

Employment AI Is High-Risk by Default

If you're using AI anywhere in the employee lifecycle — recruiting, screening, interviewing, performance management, promotion decisions, compensation analysis, or termination — assume it's high-risk.

Common tools that likely require compliance:

  • Resume screening and applicant tracking systems
  • Video interview analysis tools
  • Skills assessments and testing platforms
  • Performance prediction models
  • Workforce planning algorithms

The Shadow AI Problem

Here's where it gets complicated: employees may be using AI tools for work tasks without IT approval. If an HR manager uses ChatGPT to help draft performance reviews or screen candidates, that's potentially a high-risk AI deployment — even if it was never sanctioned.

You can't comply with Colorado's law if you don't know what AI systems are in use. Shadow AI creates invisible compliance gaps.

How Tenlines Helps

Colorado's AI Act requires organizations to protect consumers from algorithmic discrimination and maintain transparency about AI decision-making. But you can't govern AI you can't see.

Tenlines addresses the shadow AI challenge by:

Providing visibility: See which AI tools employees are using and what data they're sharing

Protecting sensitive data: Scrub PII and protected characteristics before they reach AI systems, reducing discrimination risk from the start

Maintaining audit trails: Document AI interactions for compliance and incident response

Enabling safe AI use: Let employees use AI productively while ensuring data stays protected

When sensitive data never reaches the AI system in the first place, you reduce both the risk of algorithmic discrimination and the scope of your compliance obligations.

Key Takeaways

  1. The deadline is June 30, 2026. Start preparing now — compliance requires inventory, assessment, and implementation work that takes months.

  2. Scope is broad. Any AI that affects employment, lending, housing, insurance, or similar decisions for Colorado residents is likely in scope.

  3. Both developers and deployers have obligations. If you use AI to make decisions, you're a deployer with compliance requirements.

  4. Framework alignment matters. Organizations following NIST AI RMF or ISO 42001 have affirmative defenses built in.

  5. Shadow AI is a compliance gap. You can't comply with laws governing AI you don't know exists.

Stop data leakage before it starts

Tenlines sits between your team and AI providers, scrubbing sensitive data before it leaves your environment. No workflow changes required.

Join the Waitlist