
Eightfold AI Faces Lawsuit Over Secret Candidate Rankings

Eightfold AI sued for alleged covert candidate ranking (Getty Images)


Introduction to AI-Driven Hiring and Its Challenges

The use of artificial intelligence in hiring processes has become increasingly prevalent, with many companies relying on AI-powered tools to screen and evaluate job candidates. However, a recent lawsuit filed against Eightfold AI Inc. has raised concerns about the potential risks and consequences of using AI in hiring decisions. The lawsuit alleges that Eightfold AI’s software collected personal information from unverified third-party sources, including social media profiles and location data, to create reports on job candidates without their consent or knowledge.

Allegations Against Eightfold AI

The complaint, filed in California Superior Court, claims that Eightfold AI’s software used a proprietary large language model to rank candidates on their “likelihood of success” based on the collected data. The lawsuit also alleges that several high-profile companies used Eightfold AI to screen job candidates, potentially affecting numerous applicants. Two plaintiffs allege that Eightfold AI’s evaluations caused them to be screened out of positions for which they were qualified.

Implications of AI-Driven Hiring

The use of AI in hiring decisions raises important questions about fairness, transparency, and accountability. The plaintiffs argue that Eightfold AI’s practices failed to comply with the rules for consumer reports set out in the Fair Credit Reporting Act and state law, which require disclosure that a report is being created, access to the report, and the ability to dispute its contents. The lawsuit claims that Eightfold AI’s AI-generated evaluations did not provide these protections, potentially harming job applicants who were unaware that reports were being created about them.

Expert Insights and Concerns

Jenny Yang, a partner at Outten & Golden LLP and former Chair of the U.S. Equal Employment Opportunity Commission, emphasizes that AI companies must comply with legal safeguards meant to protect workers. Yang notes that the potential harms to workers are precisely what Congress sought to prevent when it enacted the Fair Credit Reporting Act. The lawsuit highlights the need for greater transparency and accountability in AI-driven hiring, so that job applicants are treated fairly and can access the information used to evaluate them.

Broader Trends in AI Adoption

The use of AI in hiring is on the rise, with more recruiters turning to AI-powered tools to meet hiring demands. According to LinkedIn, 93% of talent acquisition professionals surveyed said they are increasing their use of AI in 2026 to meet hiring goals and evaluate talent. While AI can streamline hiring processes and improve efficiency, these tools must be used in a way that respects the rights and privacy of job applicants.

Future Directions and Considerations

As AI plays a larger role in hiring decisions, it is crucial to address the risks these technologies pose: ensuring that AI companies comply with existing laws and regulations, providing transparency and accountability in AI-driven hiring, and protecting the rights and privacy of job applicants. Doing so allows the benefits of AI in hiring to be realized while minimizing its harms and ensuring these tools are used fairly and responsibly.
