As a recruiter, I have been against using AI in the applicant/candidate selection process. While I believe automating a “thank you for your resume” note to set expectations is helpful, I’m not a fan of using AI to screen, let alone decide who moves forward.
But AI isn’t just a shiny object in hiring anymore. It has moved from “nice to have” to “hard to avoid,” which is a bit unsettling to me. Resume screening tools, candidate matching platforms, and predictive assessments promise faster decisions. But at what cost?
For busy firms and growing businesses, the appeal is obvious. AI can streamline hiring processes and reduce some serious recruiting pain points.
AI tools can screen thousands of resumes in seconds, identify qualified candidates more quickly, and reduce time-to-hire. That’s a significant operational advantage.
Well-designed systems apply the same criteria to every applicant rather than relying on ad hoc judgment. That kind of consistency can support fairness, but only when the process is actively monitored and audited.
Advanced platforms analyze skills, experience, and career trajectories rather than relying solely on job titles or cookie-cutter resumes. Done right, this can broaden candidate pools and support more inclusive hiring.
AI can identify patterns in successful hires and can flag candidates who may perform well based on defined competencies.
A recent lawsuit involving Eightfold AI highlights an important truth. In some cases, AI amplifies bias, and that creates real legal and reputational risk.
AI learns from what people feed it from the internet and other training data. If the data going in was biased in any way, the AI algorithm may replicate and even exaggerate the bias.
When AI tools are used in candidate selection, recruiters and employers may not know exactly how the algorithm evaluates, ranks, screens, or filters out applicants. That lack of visibility makes it difficult to explain or justify hiring decisions, which becomes a serious problem when those decisions are challenged.
When AI outputs are treated as final decisions rather than decision-support tools, human judgment is sidelined. This increases legal exposure and undermines good hiring practice.
Employment laws were written to protect people, not algorithms. Employers remain responsible for discriminatory outcomes, even if those outcomes are produced by third-party technology.
The class action lawsuit filed on January 20, 2026, against Eightfold AI brought national attention to the risks of using AI hiring tools. The plaintiffs alleged that the platform's matching and ranking algorithms can unfairly block certain applicants from ever reaching a human for review. And because applicants were never told what data was being collected, where it came from, or how they were being ranked, the plaintiffs claimed the practice also violated data collection and consumer protection laws.
While the case focused on the technology provider, the broader implication is clear: employers can’t outsource liability to AI vendors.
Courts and regulators are increasingly willing to scrutinize how AI tools make decisions, what data they rely on, and whether employers exercised appropriate oversight. If your hiring process disadvantages protected groups, “the algorithm did it” is not a defense.
So, the question is, “How do we use AI responsibly, transparently, and in compliance with employment law?”
When AI is used in candidate selection, several legal frameworks come into play:
Title VII of the Civil Rights Act: Prohibits discrimination based on race, color, religion, sex, or national origin. Disparate impact claims apply even without intent.
Age Discrimination in Employment Act (ADEA): Protects workers aged 40 and over. Algorithms trained on age-correlated data can easily trigger violations.
Americans with Disabilities Act (ADA): AI assessments that screen out candidates with disabilities, intentionally or not, can create compliance issues.
State and local AI laws: Jurisdictions such as New York City now require bias audits and candidate notifications for automated employment decision tools.
These risks extend beyond lawsuits. They affect employer brand, trust with candidates, and internal culture.
AI can be a powerful assistant, but it should never be the decision-maker.
AI should support hiring managers, not replace them. Final hiring decisions must involve trained humans who can question, override, and contextualize AI outputs.
Employers should understand how tools work, what data they use, and how bias is tested and mitigated. If a vendor cannot explain their model, that is a red flag.
Conduct ongoing bias and adverse impact audits. Do not assume compliance based on initial setup or vendor assurances.
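To make that recommendation concrete, here is a minimal sketch of the four-fifths (80%) rule, the screening check many adverse impact audits start with: it compares each group's selection rate to the highest group's rate and flags ratios below 0.8 for review. The group names and counts below are hypothetical, and a real audit goes well beyond this single calculation.

```python
# Minimal sketch of a four-fifths (80%) rule check on screening outcomes.
# Group labels and counts are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who advanced past the screen."""
    return selected / applicants if applicants else 0.0

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """
    groups maps a group label to (selected, total applicants).
    Returns each group's impact ratio: its selection rate divided by the
    highest group's selection rate. Ratios below 0.8 suggest possible
    adverse impact and warrant closer review.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical outcomes from an AI resume screen.
outcomes = {
    "Group A": (48, 120),   # 40% selection rate
    "Group B": (30, 110),   # about 27% selection rate
}

for group, ratio in four_fifths_check(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

In this made-up example, Group B's impact ratio comes out around 0.68, well under the 0.8 threshold, which is exactly the kind of signal an ongoing audit should surface and investigate rather than explain away.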
Clear documentation of how AI is used, what it influences, and how humans intervene is critical for legal defense and ethical accountability.
AI works best when paired with structured, competency-based interviewing and clear success criteria. Technology should enhance thoughtful hiring, not shortcut it.
AI in candidate selection is neither inherently good nor inherently dangerous. It is a tool. Like any tool, its impact depends on how it is designed, governed, and used.
The lawsuit is a reminder that innovation without guardrails carries real consequences. Firms that act responsibly will treat AI as an aid to better decisions, not a substitute for judgment, empathy, or accountability.
In hiring, as in leadership, people-first is not optional. It’s a risk management strategy.