The advancement of artificial intelligence (AI) has prompted state and federal agencies to initiate regulatory efforts and guidelines aimed at balancing innovation with workplace safeguards. A major concern across these agencies is the potential for inadvertent discriminatory practices.
Automated decision-making systems, which may rely on algorithms or AI, are increasingly used in employment settings to facilitate a wide range of decisions related to job applicants or employees, including recruitment, hiring, and promotion. While such tools provide myriad benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes (e.g., an AI tool that rejects women applicants by mimicking the features of a company’s male-dominated workforce, or a job-advertisement delivery system that reinforces gender and racial stereotypes by directing ads for certain jobs disproportionately to women or workers of color).
Over the next few weeks, we’ll be highlighting a few of the key regulatory schemes and guidelines impacting an employer’s use of AI and other automated decision-making technologies when making employment-related decisions.
Colorado’s SB 24-205: Consumer Protections in Interactions with Artificial Intelligence Systems
On May 17, 2024, Colorado enacted the first state law offering consumers protection from harm resulting from the use of AI. The newly enacted law imposes a duty on Colorado employers to protect consumers (including employees) from any known or reasonably foreseeable risks of algorithmic discrimination. The new requirements apply to “high-risk AI systems,” which include AI systems that make, or are a substantial factor in making, employment-related decisions. Identifying employers as “deployers” of AI, the statute focuses on these key compliance areas:
- Risk management: Deployers must adopt a risk-management policy and program meeting certain defined criteria. The policy and program must be reasonable under the statutory guidelines and regularly reviewed and updated.
- Impact assessment: Deployers must complete annual impact assessments for high-risk AI systems. Assessments must, at a minimum, include:
  - A statement of the system’s purpose and intended use.
  - The benefits of the system and an analysis of whether it poses known or reasonably foreseeable risks of algorithmic discrimination, including a description of how the deployer mitigates those risks.
  - A summary of the data processed as inputs and outputs of the system.
  - An overview of the categories of data, if any, the deployer used to customize the system, and the metrics used to evaluate the system’s performance and known limitations.
  - A description of the transparency measures taken, including measures to disclose the use of the system to consumers, and of the post-deployment monitoring and user safeguards provided for the system.
- Notice: Deployers must provide various notices to Colorado residents, including a prior-use notice, notice to applicants, and notice of the types of AI systems currently in use, along with the known or reasonably foreseeable risks of algorithmic discrimination.
- Self-disclosure requirement: Deployers must disclose the discovery of algorithmic discrimination within their AI systems to the Colorado Attorney General within 90 days after the discovery.
Gov. Polis’s signing statement reflects his lingering concerns about the law and encourages the legislature to “significantly improve” it before it takes effect on February 1, 2026.
Colorado employers using or considering AI in their employment processes should begin auditing their current (or planned) AI practices to ensure alignment with SB 24-205, and any forthcoming amendments, ahead of the February 1, 2026, effective date.
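One concrete place such an audit can start is with the tool’s outcomes. The sketch below is a minimal, hypothetical example, not anything SB 24-205 prescribes: it applies the EEOC’s traditional four-fifths rule of thumb to a screening tool’s selection rates by group, the kind of basic adverse-impact check that could feed into the statute’s risk-management and impact-assessment work. The data, group labels, and 80% threshold here are illustrative assumptions.

```python
# Illustrative only: a minimal adverse-impact ("four-fifths rule") check
# an audit team might run against a hiring tool's outputs. The records
# below are hypothetical; real audits would use actual applicant data.
from collections import defaultdict

# Hypothetical records: (applicant_group, selected_by_ai_tool)
records = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

# Tally selections and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in records:
    counts[group][1] += 1
    if selected:
        counts[group][0] += 1

# Selection rate per group, and the highest rate as the benchmark.
rates = {g: sel / total for g, (sel, total) in counts.items()}
highest = max(rates.values())

# Flag any group whose selection rate falls below 80% of the highest
# rate, the EEOC's traditional rule-of-thumb threshold.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A flagged ratio is a prompt for closer review, not a legal conclusion; a full audit would also examine the system’s inputs, validation evidence, and post-deployment monitoring, consistent with the impact-assessment elements described above.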