California AI Rules Go into Effect Today - What Employers Need to Know
California Artificial Intelligence Regulations Take Effect Today, October 1, 2025
What Do Employers Need to Know, and Do?
Today, new regulations governing the use of artificial intelligence (AI) in employment decisions go into effect in California. The new regulations are amendments to regulations under California’s Fair Employment and Housing Act (FEHA), which prohibits discrimination in employment and other areas.
In practical terms, the regulations confirm that employers’ use of “automated decision systems” (ADS) to make, or “facilitate” the making of, employment decisions can violate the anti-discrimination provisions of FEHA.
The new regulations create a strong incentive for employers to conduct proactive bias audits of AI tools used in employment to assess compliance with the law and to identify and address potential discrimination.
Below, we provide our recommendations for employers as well as a description of the new regulations and their requirements.
First, Our Recommendations
Silberman Law recommends that California employers consider the following steps to ensure compliance with California’s updated regulations:
- Get Ahead and Stay Ahead of the Curve – Create an internal task force to address rapidly evolving AI regulations. The task force should include, at a minimum, members from HR, Talent Acquisition, IT, and Legal.
- Carefully Inventory AI Tools Being Used – Take inventory of tools used to make employment decisions and categorize each as within or outside the scope of the amended regulations.
- Create Guardrails – Create clear rules on the adoption and use of ADS for employment decisions. Train personnel who use these tools on the rules and the risks associated with non-compliance.
- Conduct Risk Assessment Audits – Conduct regular bias audits to evaluate whether AI tools result in disparities based on race, ethnicity, sex, or any other categories protected by law. Consider the quality, efficacy, recency, and scope of any bias audits.
- Cloak It In Legal Privilege – Conduct this important, sensitive work under legal privilege, with a thoughtful strategy on whether (and how) to disclose, present, or respond to the results.
And Now, the Substance
The regulations define an ADS as “[a] computational process that makes a decision or facilitates human decision-making regarding employment,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
The regulations define AI as “[a] machine-based system that infers, from the input it receives, how to generate outputs,” which can include “predictions, content, recommendations, or decisions.”
Examples of covered decisions and tools include, but are not limited to –
- Screening resumes for key words and terms or patterns;
- Assessing applicants or employees with “questions, puzzles, games, and other challenges;”
- Directing job advertisements to targeted groups;
- Evaluating facial expressions and speech and voice patterns in online interviews;
- Analyzing applicant or employee information from third parties.
Bottom line – The regulations govern a broad range of AI tools that either replace or facilitate human decision-making.
Pre-Employment Practices
An often-touted benefit of employment-based AI tools is greatly increased efficiency in the hiring process. However, the regulations provide examples of common AI tools that, while presumably providing that hoped-for efficiency in decision-making, may also result in unlawful employment discrimination.
Some examples from the regulations include –
- An ADS “that, for example, measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act.” Without affording a reasonable accommodation, the ADS may result in unlawful discrimination.
- An ADS that “analyzes an applicant’s tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other” protected characteristic, and may therefore result in unlawful discrimination.
- Employers cannot use an ADS or any “online application technology” that “limits, screens out, ranks, or prioritizes applicants based on their schedule” if it has a disparate impact on applicants based on their religion, disability or medical condition, unless the technology is “job-related and consistent with business necessity” and “includes a mechanism for the applicant to request an accommodation.”
Unlawful Medical or Psychological Inquiries
Employers are generally prohibited from requiring medical or psychological examinations before extending an offer of employment. Here, the regulations extend the term “medical or psychological examination” to those administered through an ADS. The regulations add that an ADS such as “a test, question, puzzle, game, or other challenge that is likely to elicit information about a disability” could be a medical or psychological inquiry, which may fall within FEHA’s general prohibition.
Criminal History
The new FEHA regulations also extend existing criminal history rules to “inquiring about criminal history through ... the use of an automated-decision system.”
Third-Party Agent Liability
Echoing a theme in the current litigation against Workday, the regulations expressly extend liability for ADS discrimination to employers’ agents. The regulations define “agents” to include “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity ... including when such activities and decisions are conducted in whole or in part through the use of an automated decision system.”
Regulations Encourage Proactive Employer Anti-Bias Testing
Throughout the new regulations, California reiterates that anti-bias testing or similar proactive efforts to avoid unlawful discrimination (or the lack of such efforts) are relevant when asserting or defending against a claim of ADS discrimination. So, while California does not mandate that employers conduct bias audits or analyses, thorough testing of employment-based AI tools may well be money well spent.
If you need assistance with AI compliance or have any questions, simply reply to this email or reach out to the Silberman Law attorney with whom you work.