Deep Learning in Action

Formative Activity

AI is quietly reshaping how companies find and hire talent. At first this shift was largely invisible, but many candidates are now aware that technologies such as Applicant Tracking Systems (ATS) decide whether their resumes are screened out or passed along. Instead of a human recruiter scanning resumes and conducting interviews, much of the hiring process is now handled by AI tools. In fact, an AI arms race is emerging, with candidates turning to the same technologies to gain an edge (Newsweek, 2025). AI-powered tools use natural language processing (NLP) and machine learning (ML) to identify key skills and experiences and to rank applicants by how closely they resemble the job profile (Kodiyan, 2019).
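To make the ranking idea concrete, here is a minimal sketch of keyword-based matching. It uses simple word-set (Jaccard) similarity; real ATS products rely on far richer NLP features, and the job description, resumes, and function names below are invented purely for illustration.

```python
# Toy sketch (not any vendor's actual algorithm): rank resumes by how
# closely their vocabulary overlaps with a job description, using
# Jaccard similarity over lowercase word sets.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def similarity(resume: str, job_profile: str) -> float:
    """Jaccard similarity: shared words / all distinct words."""
    a, b = tokenize(resume), tokenize(job_profile)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_candidates(resumes: dict[str, str], job_profile: str) -> list[tuple[str, float]]:
    """Return (candidate, score) pairs sorted from best to worst match."""
    scores = {name: similarity(text, job_profile) for name, text in resumes.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical job profile and candidates for illustration.
job = "python machine learning data analysis sql"
resumes = {
    "alice": "python sql data analysis dashboards",
    "bob": "carpentry woodworking project management",
}
ranking = rank_candidates(resumes, job)
```

Even this crude overlap score reproduces the core ATS behaviour the essay describes: candidates whose wording mirrors the job profile rise to the top, regardless of what their experience actually means.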

This might seem like a practical improvement: hiring is faster and more cost-efficient, and recruiters can discover candidates from multiple platforms (Li et al., 2023). Yet the situation is more nuanced than that. These AI systems are only as capable and unbiased as the data used to train them. In one well-documented case, Amazon’s AI hiring tool learned from a historical dataset dominated by male employees. The result was an unintentional reinforcement of gender inequalities: female candidates were pushed aside simply because their resumes differed in vocabulary or in the activities they listed (Kodiyan, 2019).
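As a deliberately simplified illustration of how such bias can arise (a toy sketch, not Amazon’s actual model), the snippet below derives word weights from an invented, skewed “historical” hiring dataset. Because a gendered word appears only among rejected resumes, it inherits a penalty, so an otherwise identical resume scores lower once it contains that word.

```python
# Toy illustration of bias inherited from training data. The "history"
# is fabricated: the word "women's" happens to occur only in rejected
# resumes, so the learned weights penalize it even though it says
# nothing about competence.
from collections import Counter

hired = [
    "chess club captain software engineer",
    "software engineer rugby team",
]
rejected = [
    "women's chess club captain software engineer",
]

def word_weights(hired: list[str], rejected: list[str]) -> dict[str, int]:
    """Weight = (appearances among hired) - (appearances among rejected)."""
    pos = Counter(w for doc in hired for w in doc.split())
    neg = Counter(w for doc in rejected for w in doc.split())
    return {w: pos[w] - neg[w] for w in pos | neg}

def score(resume: str, weights: dict[str, int]) -> int:
    """Sum the learned weights of the resume's words."""
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_weights(hired, rejected)
# The same resume scores lower once it contains "women's":
base = score("software engineer chess club captain", weights)
flagged = score("women's software engineer chess club captain", weights)
```

No one programmed the penalty; it fell out of counting a biased history, which is exactly the failure mode reported for the Amazon tool.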

Gender bias is not the only issue with AI-driven recruitment. Human rights concepts such as autonomy, privacy, transparency, and non-discrimination are all on the line (Hunkenschroer and Kriebitz, 2022). Candidates rarely know how decisions are made, more often than not receiving only an AI-written email thanking them for their interest, while HR professionals themselves struggle to interpret the AI’s recommendations or rely on them too heavily (Li et al., 2023). The new approach promises efficiency, but if it continues without proper rules and regulations, it risks perpetuating inequalities and eroding trust in hiring processes.

Ultimately, AI in recruitment is a double-edged sword. In principle it could broaden equal opportunity, though in practice it remains far from that; used carelessly, it can just as easily deepen human bias and erode fairness and transparency. Like any powerful early-stage technology, it demands careful ethical analysis, sound data management, and ongoing oversight to ensure it helps rather than harms.

References
  • Hunkenschroer, A. L. and Kriebitz, A. (2022) ‘Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring’, AI and Ethics, 3, pp. 199–213. Available at: https://doi.org/10.1007/s43681-022-00166-4 (Accessed: 20 October 2025).
  • Li, L., Lassiter, T., Oh, J. and Lee, M. K. (2023) ‘Algorithmic Hiring in Practice: Recruiter and HR Professional’s Perspectives on AI Use in Hiring’, University of North Carolina / University of Texas. Available at: https://dl.acm.org/doi/10.1145/3461702.3462531 (Accessed: 20 October 2025).
  • Kodiyan, A. A. (2019) ‘An overview of ethical issues in using AI systems in hiring with a case study of Amazon’s AI based hiring tool’, November. (Accessed: 20 October 2025).