
AI in Employment & Hiring: Legal Considerations for Kentucky Employers

Stites & Harbison Client Alert, April 2, 2026

Artificial intelligence (“AI”) is transforming how Kentucky employers across all industries recruit, screen, and manage their workforce. With federal agencies, courts, and nearby states issuing new rules and guidance, Kentucky employers must understand the opportunities and risks associated with AI in employment decisions.

The Promise and Risks of AI in Employment

Employers are increasingly deploying AI systems across multiple sectors of decision-making and administrative functions. AI in hiring, pay, promotion, and workforce management brings the promise of faster and more consistent applicant screening, improved candidate matching for hard-to-fill roles, and nuanced analytics for workforce management decision-making. But with that increased reliance comes the risk of disparate impact discrimination, the exclusion of nontraditional candidates, confidentiality concerns, and other issues that can result from data-trained algorithmic bias and the use of third-party trained technology. AI's automation only amplifies these risks, as the effects of an errant tool can spread quickly, resulting in companywide impacts.

Employers must remain diligent when selecting AI technology, as well as training and deploying it in their businesses. Employers must also stay up-to-date with the increasing amount of legislation addressing the subject.

Federal Legal Landscape

Litigation throughout the country highlights the tension between AI-driven employment decisions and federal nondiscrimination laws, including Title VII, the Age Discrimination in Employment Act (“ADEA”), and the Americans with Disabilities Act (“ADA”). Recent cases illustrate the significant risks employers face when AI tools produce potentially discriminatory outcomes.

  • Mobley v. Workday, Inc., 2025 WL 1424347 and 740 F. Supp. 3d 796 (N.D. Cal. 2025): Mobley, a job applicant, initiated this action against Workday, a provider of human resources management and applicant screening technology, alleging that the company incorporated AI into its algorithmic decision-making tools used by employers across multiple industries. According to the complaint, since 2017 Mobley applied for more than 100 positions with employers that relied on Workday's screening tools. Despite his qualifications, Mobley alleged he was rejected for every position. Mobley asserted that Workday's algorithmic tools resulted in unlawful discrimination in violation of Title VII, the ADEA, and the ADA, alleging that applicants who are African American, over age 40, and/or disabled were disparately impacted. Workday moved to dismiss the case, but the court allowed Mobley's disparate impact claims to proceed, and the Northern District of California subsequently granted preliminary class certification. The Workday case underscores that the use of AI in hiring and recruitment implicates possible disparate impact claims under Title VII, the ADEA, and the ADA.
  • Harper v. Sirius XM, LLC, 2:25-CV-12403 (E.D. Mich. 2025): Harper, another job applicant, sued Sirius XM, alleging that the company used algorithmic decision-making tools, including AI provided by a third-party vendor, to screen and reject job applicants in a manner that intentionally and disproportionately disqualifies African American applicants from securing employment. Harper specifically advanced disparate treatment and disparate impact claims against Sirius XM, alleging both intentional discrimination and unintentional discriminatory effect. Although the case is still ongoing, Harper reinforces the idea that contracting with a third-party vendor for AI services can lead to potential lawsuits against the employer even where the employer did not itself implement the AI practices.

Cases like Workday and Harper demonstrate that AI systems trained on past hiring data may perpetuate or amplify historic biases (e.g., age, gender, race, disability). If these systems disproportionately screen out applicants who are members of protected classes, employers using the technology may face liability under Title VII, the ADA, the ADEA, other federal laws, and state law corollaries. Moreover, in specialized settings such as hospital networks, where employment decisions are transparent and highly regulated, there is an increased risk of litigation, reputational damage, or regulatory action.

In addition to the ever-growing body of AI litigation, employers also need to be cognizant of the policy changes made by the Trump Administration related to AI.

  • Trump Administration Implications: President Trump's Executive Order "Restoring Equality of Opportunity and Meritocracy" proclaims the policy of the U.S. to eliminate the use of disparate-impact liability. It is unclear at this time how this directive will affect AI-related hiring litigation, but the Executive Order does not erase existing disparate impact jurisprudence or restrict private lawsuits. So, while this may mean that the EEOC no longer pursues disparate impact claims, it is up to Congress or the courts to change employer liability in this area.
  • The Federal Trade Commission ("FTC"): The FTC, the agency responsible for enforcing the Fair Credit Reporting Act, previously published tips for companies using AI and algorithms. Although that guidance has since been removed by the second Trump administration, the general principles remain sound and relevant to employers seeking to avoid discriminatory and biased outcomes. Specifically, the FTC recommended that businesses using AI should: (1) review their data sets for gaps and supplement missing data as necessary; (2) monitor for discriminatory outcomes; (3) embrace transparency and independence by publishing results and considering outside audits; (4) not exaggerate or over-promise unbiased results, as doing so may be a deceptive trade practice; (5) disclose how they are using captured data; (6) ensure that the AI tools do more good than harm; and (7) remain accountable for algorithmic results, or risk an FTC investigation or enforcement action.

Ultimately, the federal landscape is unclear, but the use of AI in employment and hiring decisions still creates the risk of litigation. Employers should monitor their AI use to ensure that members of protected classes are not being overlooked or automatically rejected by the algorithm.

Kentucky’s Current Legal Framework

Kentucky has not yet enacted laws regulating AI use in private-sector hiring. However, Kentucky courts typically interpret the Kentucky Civil Rights Act in line with federal law, meaning that AI-related disparate impacts or discriminatory outcomes could still expose employers to liability.

SB 4, signed by Governor Andy Beshear in 2025, directs the Commonwealth Office of Technology to develop ethical AI standards for state government. The Commonwealth Office of Technology must also prioritize privacy and the protection of individuals’ and businesses’ data, considering and documenting possible discrimination, oversight, risks, and benefits of the use of generative AI. While this does not directly regulate private employers, it signals increasing legislative attention to AI oversight in Kentucky.

Third-Party Vendor Concerns

If not thoughtfully monitored, confidential employee information can be shared with learning models and reach outside an organization, particularly when AI is used for internal investigations, disciplinary actions, medical leave and accommodation requests, as well as resume screening and background checks. The Workday court's observation that "[w]here the employer has delegated control of some of the employer's traditional rights, such as hiring or firing, to a third party, the third party has been found to be an 'employer' by virtue of the agency relationship" (citation omitted) suggests that agent-vendors of employers can be held liable under Title VII and the ADEA. Additionally, following the logic of Workday, employers can also be held liable for the actions of their agents, highlighting the importance of indemnity clauses in AI vendor contracts.

Suggested Best Practices

As the federal and state landscapes continue to evolve, employers should remain diligent in monitoring how AI-driven tools are trained, governed, documented, and audited. Litigation and regulatory materials demonstrate that insufficient documentation, minimal human oversight, or unquestioned reliance on third-party vendor tools can heighten legal exposure, even absent any discriminatory motive. It remains the responsibility of the employer to explain employment outcomes and to demonstrate that AI-driven tools are maintained with ongoing, meaningful oversight.

To reduce AI-related risk, we suggest that Kentucky employers consider adopting the following practices:

· Maintain human oversight for all final employment decisions;
· Ensure AI tools are trained on nondiscriminatory, representative data;
· Audit AI systems regularly for disparate impact;
· Document AI use and decision-making workflows;
· Review vendor contracts and include strong indemnity clauses; and
· Implement internal AI usage policies with clear permissible/prohibited practices.

Conclusion

AI can greatly enhance efficiency and hiring outcomes for Kentucky employers, but it must be deployed responsibly. Clear policies, strong vendor oversight, and consistent monitoring are essential to reducing legal exposure while maximizing the value of AI-driven tools.

Stites & Harbison’s Employment Law Attorneys regularly provide clients with creative, competitive, and cost-effective employment law solutions. For questions, comments, or assistance with any employment-related matters, including the use of AI-driven tools, please contact the authors or any member of the employment group.

Contact

Rachel Gumbel
Attorney
502-681-0476

Harlee P. Havens
Attorney
859-226-2219

Jackson B. Hurst-Sanders
Attorney
502-681-0415
