From résumé screeners to video‑analysis software, automated hiring systems are facing scrutiny for disadvantaging American Descendants of Slavery (ADOS) applicants, disabled workers, veterans, and older job seekers.
Written By Shen Pe Utz Taa-Neter
When students at SUNY Niagara begin applying for jobs or internships, many assume a hiring manager will be the first to read their résumé. Increasingly, that isn’t the case. Employers across industries now rely on AI‑driven screening tools to sort applications, score video interviews, and identify “ideal” candidates before a human ever enters the process. While these systems promise efficiency, researchers and legal experts warn they may unintentionally filter out qualified people—especially those from historically marginalized groups.
AI‑based hiring tools are marketed as objective, but recent investigations show they can reproduce the same inequities found in the job market. For American Descendants of Slavery (ADOS), people with disabilities, veterans transitioning to civilian work, and older job seekers, these systems may amplify existing disadvantages rather than reduce them. With many SUNY Niagara students preparing to enter the workforce, understanding how these tools operate—and where they fall short—is becoming increasingly important.
Applicant Tracking Systems, or ATS, are now standard at large companies. These systems scan résumés for keywords, job titles, and formatting patterns that match employer preferences. A résumé that doesn’t use the exact phrasing the system expects may be ranked lower, even if the applicant has the right experience. Some tools also evaluate writing style, grammar, and “fit,” using models trained on past hiring decisions. As law professor Pauline Kim explained in an interview about automated hiring, “If the AI is built in a way that is not attentive to the risks of bias… then it can not only perpetuate those patterns of exclusion, it could actually worsen it.”
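To see how this kind of exact-phrase matching can trip up a qualified applicant, consider the simplified sketch below. The keywords, résumé lines, and scoring rule are hypothetical stand-ins; commercial ATS products are far more sophisticated, but the underlying failure mode is the same.

```python
# Simplified sketch of exact-phrase keyword scoring. The keywords,
# resume lines, and scoring rule are hypothetical, not any vendor's
# actual algorithm; the point is the failure mode, not the product.

JOB_KEYWORDS = {"project management", "stakeholder communication", "budgeting"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found verbatim."""
    text = resume_text.lower()
    return sum(kw in text for kw in JOB_KEYWORDS) / len(JOB_KEYWORDS)

# Two descriptions of the same experience:
verbatim = "Led project management and stakeholder communication; owned budgeting."
paraphrased = "Directed cross-team initiatives, briefed leadership, managed budgets."

print(keyword_score(verbatim))     # 1.0 -- ranked highly
print(keyword_score(paraphrased))  # 0.0 -- screened out, same experience
```

The second candidate describes identical work but never uses the posting’s exact phrases, so a literal matcher scores them at zero.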
Several real‑world cases illustrate how these systems can fail. Amazon discontinued an internal recruiting algorithm after discovering it downgraded résumés containing the word “women’s” and penalized graduates from women’s colleges. The company found that the model had learned from historical hiring data dominated by male applicants, leading it to favor résumés that resembled past patterns.
HireVue, a major vendor of video‑interview software, faced scrutiny after researchers found its facial‑analysis and speech‑scoring tools showed bias against certain accents and expressions. Advocacy groups raised concerns that the system could misinterpret the communication patterns of non‑white applicants or applicants with disabilities, potentially lowering their scores before a human ever reviewed their interview.
A class‑action lawsuit filed against Workday, a widely used HR software provider, alleges that its AI‑enabled screening system disproportionately rejected African‑American applicants, older applicants, and applicants with disabilities. One plaintiff, Derek Mobley, claimed he was rejected from more than 100 jobs over seven years due to automated decisions. The lawsuit argues that the system’s design allowed discriminatory patterns to emerge at scale.
For ADOS applicants, the concerns are specific and well‑documented. AI models trained on historical hiring data may replicate patterns that excluded ADOS workers in the past. A University of Washington study found that in AI‑assisted résumé screenings across nine occupations, the technology favored white‑associated names in more than 85 percent of cases. In some settings, ADOS male applicants were disadvantaged compared to white male applicants in up to 100 percent of cases. “You kind of just get this positive feedback loop,” said Kyra Wilson, the study’s lead author, describing how biased data can reinforce biased outcomes.
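Wilson’s feedback loop can be made concrete with a toy simulation: a model that scores candidates by how closely they resemble past hires keeps tilting its own future training data. The groups, numbers, and scoring rule below are invented for illustration and are not drawn from the study itself.

```python
# Toy simulation of the feedback loop described above. Groups, sizes,
# and the scoring rule are invented; they are not taken from the
# University of Washington study.

import random

random.seed(0)
past_hires = ["A"] * 70 + ["B"] * 30   # historically skewed hiring data

for round_num in range(1, 6):
    rate_a = past_hires.count("A") / len(past_hires)
    # Score = resemblance to past hires, plus noise standing in for
    # genuine qualifications (identically distributed for both groups).
    applicants = (
        [("A", rate_a + random.random()) for _ in range(50)]
        + [("B", (1 - rate_a) + random.random()) for _ in range(50)]
    )
    hired = sorted(applicants, key=lambda c: c[1], reverse=True)[:10]
    past_hires += [group for group, _ in hired]   # hires re-enter the data
    print(f"Round {round_num}: group A share of hiring data = "
          f"{past_hires.count('A') / len(past_hires):.2f}")
```

Even though both groups are equally qualified in this sketch, the majority group’s share of the training data climbs every round, because each cohort of hires is fed back into the data the model learns from.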
Older workers face their own challenges. In one EEOC settlement, tutoring company iTutorGroup agreed to pay $365,000 after its application-screening software was found to have automatically rejected female applicants aged 55 and older and male applicants aged 60 and older. In the Workday lawsuit, age was one of the central claims, with plaintiffs arguing that the algorithm screened out older applicants before a human ever reviewed their materials.
Veterans and disabled applicants encounter additional obstacles. Military job titles often do not match civilian keywords, causing tracking systems to misrank otherwise strong candidates, as the sketch below illustrates. A veteran who managed logistics for hundreds of personnel may be overlooked simply because the system does not recognize the terminology. Disabled applicants may be penalized by video‑analysis tools that misread their facial expressions or speech patterns. Employment gaps, common for those recovering from injuries or navigating disability accommodations, may also trigger negative scores.
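The vocabulary gap is easy to demonstrate. In the hypothetical sketch below, an Army job title describes supply-chain work but shares no exact keywords with a civilian posting, and restating the same experience in civilian terms restores the match. The keyword list and rewrite are illustrative, not a real military-to-civilian crosswalk used by any employer.

```python
# Hypothetical illustration of the vocabulary gap: the military title
# below describes supply-chain work but shares no keywords with the
# posting, so an exact-match screen finds zero overlap. The keyword
# list and the civilian rewrite are invented examples.

posting_keywords = {"supply chain", "inventory management", "logistics"}

resume_line = ("92A Automated Logistical Specialist, accounted for parts "
               "supporting 400 personnel")

def matches(text: str) -> list[str]:
    return sorted(kw for kw in posting_keywords if kw in text.lower())

print(matches(resume_line))   # [] -- no hits, despite relevant experience

# Restating the same experience in civilian terms restores the match:
translated = ("Supply chain and inventory management specialist, "
              "maintained parts inventory supporting 400 personnel")
print(matches(translated))    # ['inventory management', 'supply chain']
```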
Despite these concerns, AI hiring tools are becoming more common, not less. Career services staff at SUNY Niagara note that students increasingly submit applications through automated portals, often without realizing how much of the process is handled by software. Understanding how these systems work can help applicants navigate them more effectively: using clear keywords, tailoring résumés to job descriptions, and requesting accommodations when needed.
AI will continue shaping the hiring landscape, but the question of fairness remains unresolved. For students preparing to enter the workforce, awareness is a form of preparation. As employers adopt more automated tools, the challenge is ensuring that efficiency does not come at the cost of equity—and that qualified applicants are seen for their abilities, not filtered out by an algorithm.
