State AI hiring laws create complex compliance maze for employers

The federal retreat from disparate impact enforcement in AI hiring is creating a patchwork of state and local regulations that companies must navigate, with New York City’s Local Law 144 leading the charge and state attorneys general filling enforcement gaps. Businesses using AI in recruitment now face a more complex legal landscape in which proving job-relatedness and monitoring outcomes have become essential, and recent landmark cases against Workday and SiriusXM demonstrate that both vendors and employers can be held liable for discriminatory AI tools.

What you should know: New York City’s Local Law 144 remains the only comprehensive AI hiring regulation in the U.S., but it has significant limitations in scope and enforcement.

  • The law requires companies to conduct an independent bias audit within the 12 months before using an automated employment decision tool, post a public summary of the results, and give candidates at least 10 business days’ advance notice.
  • However, the audit scope covers only race, ethnicity, and sex, omitting the age and disability discrimination protections that federal law provides.
  • The law uses the “4/5ths rule” as a screening benchmark: the selection rate for each protected group should be at least 80% of the rate for the group with the highest selection rate. Meeting that threshold isn’t a definitive legal safe harbor (a calculation sketch follows this list).
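To make the 4/5ths screen concrete, the following is a minimal Python sketch of the impact-ratio arithmetic using hypothetical group labels and applicant outcomes; it illustrates the calculation only, not the independent audit methodology the law requires.

```python
from collections import Counter

# Hypothetical applicant outcomes: (group, hired) pairs.
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
hires = Counter(group for group, hired in outcomes if hired)

# Selection rate = hires / applicants for each group.
rates = {g: hires.get(g, 0) / n for g, n in applicants.items()}
benchmark = max(rates.values())  # rate of the most frequently selected group

# Impact ratio: each group's selection rate relative to the benchmark rate.
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "below 4/5ths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact_ratio={ratio:.2f} ({flag})")
```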

The big picture: Title VII’s disparate impact doctrine continues to govern private litigation regardless of federal enforcement changes, creating ongoing liability risks for companies.

  • Under Title VII, neutral screening practices that disproportionately exclude protected groups are unlawful unless employers prove the practice is job-related and consistent with business necessity.
  • Even with job-relatedness proven, plaintiffs can still prevail by showing equally effective, less discriminatory alternatives exist.
  • The April 23, 2025 Executive Order directing agencies to “eliminate or deprioritize disparate-impact enforcement” only affects federal investigations, not private lawsuits or state enforcement.

Landmark cases reshaping vendor liability: Two major cases are establishing new precedents for AI hiring discrimination lawsuits.

  • In Mobley v. Workday, a California federal court ruled in July 2024 that Workday, a hiring software provider, could be liable as an employer’s “agent” under Title VII when its tools perform traditional hiring functions like recommending candidates and auto-rejecting others.
  • The case received conditional certification in May 2025 for age discrimination claims, becoming the first federal certification order for an AI-screening case and authorizing notice to applicants over 40 who used Workday-enabled systems.
  • Harper v. SiriusXM, filed August 4, 2025, alleges the company’s AI tool disproportionately rejected Black applicants using proxy variables like education history, employment gaps, and ZIP codes.

State attorneys general stepping up enforcement: California and New Jersey are leading state-level AI discrimination enforcement using civil rights and consumer protection laws.

  • California’s Attorney General issued guidance warning that AI systems that create risks to fair competition could trigger enforcement under the state’s competition laws, and that state law specifically prohibits disparate impact discrimination.
  • New Jersey Attorney General Matthew Platkin published guidance clarifying that the state’s Law Against Discrimination applies to algorithmic discrimination the same way as other discriminatory conduct, regardless of whether it’s automated or human-driven.
  • States are combining civil rights enforcement with unfair, deceptive, or abusive practices laws to police claims that AI tools are “bias-free,” “fair,” or “validated.”

What businesses must do now: Companies wanting to continue using AI in hiring need to implement comprehensive monitoring and validation processes.

  • Measure outcomes across all protected classes, not just those covered by Local Law 144’s limited scope.
  • Validate every consequential screening tool with appropriate statistical analysis and document the validation evidence (a testing sketch follows this list).
  • Require AI vendors to provide transparency about their algorithms and be prepared to “show their work” in potential litigation.
  • As Sara H. Jodka, the author and a regular contributing columnist on privacy and data security for Reuters Legal News, notes: “Failure to take proactive steps to ensure the technology being used is void of bias is not anti-innovation, it is ensuring your model does not someday become Exhibit A.”
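As a starting point for the kind of statistical validation described above, the sketch below tests whether a gap in selection rates between two groups is larger than chance alone would plausibly produce. It uses hypothetical counts and Fisher’s exact test from SciPy; it is one illustrative method, not a substitute for a formal validation study or legal review.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 outcome table for two applicant groups:
# rows = group, columns = [hired, not hired].
table = [
    [30, 170],  # group_a: 30 of 200 applicants hired (15%)
    [12, 188],  # group_b: 12 of 200 applicants hired (6%)
]

# Fisher's exact test asks whether the gap in selection rates
# is larger than random variation would plausibly explain.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"odds ratio: {odds_ratio:.2f}")
print(f"p-value: {p_value:.4f}")  # a small p-value means the gap is unlikely to be chance
```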
Stepping into the AI void in employment: Why state AI rules now matter more than federal policy
