Applicant Tracking System Ethics


Summary

Applicant tracking system (ATS) ethics refers to the moral principles and fairness concerns raised by software that automates resume screening and hiring decisions. These systems can unintentionally encode bias, limit diversity, and prioritize rigid criteria over human potential, which makes it crucial to examine how the technology shapes recruitment.

  • Question AI scoring: Always review prompts asking you to opt into artificial intelligence scoring during job applications and consider opting out if you are unsure about how your data will be used.
  • Advocate for transparency: Encourage employers to explain how their applicant tracking systems work and request clarity on what criteria are used to filter candidates.
  • Support human judgment: Remind decision-makers to balance automated screening results with thoughtful human evaluation to avoid missing out on unique, qualified candidates.
Summarized by AI based on LinkedIn member posts
  • View profile for Gary Silbermann

    Recruitment Director @ acuityconsultants.jobs

    19,901 followers

    Applicant Tracking Systems (ATS) have done more damage to recruitment than good. They were sold to companies as the holy grail of hiring: efficiency, streamlined processes, "talent mapping" (seriously, does anyone actually know what that means?!). But instead of finding talent, they've turned the whole process into an industrial-scale bias machine. Why I say this:

    - Keyword Bingo: Candidates now write CVs for robots, not humans. If you don't sprinkle in the "magic words", the system spits you out before anyone with an actual brain sees your profile. Great candidates are lost because they didn't say "stakeholder engagement" 17 times (see the sketch after this post).

    - The Application Counter of Doom: ATS systems proudly flag how many times a candidate has applied to your company. Talent Acquisition Specialists see "applied 7 times" and suddenly it means: unemployable. No thought that maybe, just maybe, the previous hiring manager missed a trick, wrote a vague job spec, or couldn't be arsed to read the CV. Nope, the machine says "7" so clearly the human must be worthless.

    - Bias on Steroids: Some ATS platforms allow filtering by things like university attended, location, or arbitrary "years of experience". That's not removing bias, that's giving recruiters a prettier dashboard to automate their bias.

    - Formatting Crimes: God forbid your CV has an unusual font, a table, or a creative layout. ATS will scramble it into modern art and then reject you for being "incomplete". So much for valuing creativity.

    - Ghosting Made Easy: ATS systems don't just make it easy to apply, they make it even easier to ignore candidates at scale. You don't even have to open an application to reject it. With one click, 200 hopefuls go straight into the black hole. Efficiency!

    And yet, companies cry "talent shortage". No. What we have is a shortage of common sense. Because here's the thing: recruitment is, at its core, about humans making human decisions about other humans. You cannot automate judgment, gut feel, or the ability to spot raw potential. But sure, let's keep rejecting brilliant candidates because they applied one too many times or forgot to bold "Agile" on page 2. That'll definitely fix the skills gap. ATS isn't helping companies hire better. It's just helping them reject faster.
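    The "Keyword Bingo" complaint above is easy to make concrete. Below is a minimal sketch of a naive verbatim-keyword screener; the required phrases, threshold, and sample resume line are hypothetical illustrations, not taken from any real ATS.

    ```python
    # Minimal sketch of naive verbatim-keyword screening (hypothetical criteria).
    REQUIRED_KEYWORDS = {"stakeholder engagement", "agile", "talent mapping"}

    def keyword_score(resume_text: str) -> float:
        """Fraction of required keywords that appear verbatim in the resume."""
        text = resume_text.lower()
        return sum(kw in text for kw in REQUIRED_KEYWORDS) / len(REQUIRED_KEYWORDS)

    def screen(resume_text: str, threshold: float = 0.67) -> str:
        """Auto-reject anything below the keyword threshold; no human involved."""
        return "advance" if keyword_score(resume_text) >= threshold else "auto-reject"

    # A strong candidate who phrases the same skills differently:
    candidate = "Led cross-functional delivery with iterative sprints and close partner alignment."
    print(screen(candidate))  # -> "auto-reject": zero verbatim matches
    ```

    A candidate who describes the same skills in different words scores zero, which is exactly how qualified people drop out before a human ever reads the CV.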

  • View profile for Justin Dux

    One dumb idea away from brilliant.

    7,486 followers

    Job seekers, be aware: some applicant tracking systems, like ADP, automatically prompt you to opt into AI scoring. Never agree to this.

    Original text from an ADP-hosted job application: "Please note that during the application process, you may be prompted to opt-in to Artificial Intelligence (“AI”) scoring. This is an automatic request in the ADP system but is not a feature that is used by [Employer name] in its recruiting and hiring process. Your response will have no impact on your application to [Employer name]."

    Employers will opt in and provide historical data from past applications. But this opt-in request isn’t about improving hiring—it’s a legal maneuver. If you consent, companies can later argue you accepted the risks of AI-driven discrimination, limiting your ability to challenge unfair hiring practices.

    AI hiring systems have no legal guardrails and reinforce existing biases. The most common languages and resume formats are favored, often benefiting men while filtering out women and underrepresented candidates. Until AI hiring can prove its fairness and transparency, we should not be providing it with more data to train on—especially if that data is being used against us.

    Do not indemnify these companies. Opt out of AI scoring wherever possible. LinkedIn also has an AI review feature—turn that off too. Protect yourself. Demand accountability. Opt out. I want to see a future for AI technology that is positive but I do not support indemnifying their early mistakes. #DoNotIndemnifyAI

  • View profile for Chris P.

    Medical Device & Pharmaceutical Sales

    11,975 followers

    Is It Time to Rethink ATS?

    Over the past two weeks, I ran a personal experiment to test the effectiveness (and fairness) of Applicant Tracking Systems (ATS). I sent out 20 blind resumes, carefully optimized with keywords for roles I am well overqualified for.

    Here’s some context:
    • I’m attending MIT’s Global Thought Leadership Program.
    • I’m highly networked on LinkedIn and performing well in my current role.
    • I’ve honed expertise across MedTech, AI, and leadership.
    • Each resume was carefully tailored and optimized for specific job postings.

    The result? Not a single call back.

    This isn’t about me—it’s about the deeper issue with ATS and how AI-driven hiring systems operate. These systems prioritize rigid keywords, cookie-cutter career paths, and automated filters over actual human potential.

    The problem:
    • Bias: ATS favors the same patterns and backgrounds, often shutting out qualified candidates.
    • Lack of nuance: AI can’t always grasp transferable skills or unique experience.
    • Missed opportunities: By filtering out nontraditional candidates, companies may lose out on exceptional talent.

    If this is happening to someone with a solid track record, how many incredible candidates are being overlooked every day? I believe it’s time to rethink how we use AI in hiring. Systems should empower human potential—not limit it.

    What are your thoughts? Let’s discuss.

    #ArtificialIntelligence #HiringBias #Recruitment #FutureOfWork #CareerDevelopment #JobSearch #AIInRecruitment #HumanResources #DiversityAndInclusion #Technology #Innovation #Leadership #JobMarket #MedTech #WorkplaceCulture

  • View profile for Jennifer Sethre

    CEO | HR Tech | SaaS | AI Tech Solutions Combining Humanity and Technology | Outplacement Alternatives

    12,459 followers

    I mentioned this in our 'Live' event earlier today, but here’s a more granular explanation of how Applicant Tracking Systems (ATS) can unintentionally lead to biased outcomes. Here’s how it can happen:

    Algorithmic Bias
    ATS algorithms are designed to screen resumes based on criteria set by employers, such as keywords, skills, experience levels, and qualifications. However, if these criteria are not selected with diversity in mind, the ATS might unintentionally favor certain groups over others. For example, if the criteria include terms or phrases more commonly used by one demographic, the system may overlook equally qualified candidates from different backgrounds.

    Programming and Design Flaws
    The programming and design of an ATS can introduce bias, especially if it is not thoroughly tested across diverse scenarios or lacks the sophistication to understand nuanced human factors. For instance, an ATS that misinterprets non-traditional resume formats or creative layouts can disproportionately exclude candidates who use these styles.

    Historical Data Usage
    Many ATS algorithms are trained on historical hiring data. If this data reflects past biases or non-inclusive practices, the system may perpetuate these biases by identifying 'ideal' candidates based on profiles that mirror those previously selected, rather than evaluating all candidates based on unbiased potential (see the sketch after this post).

    Keyword Dependence
    Over-reliance on specific keywords can disadvantage qualified candidates who do not use the exact terms specified in the ATS, even though they have the necessary skills and experiences. This filtering can particularly affect older candidates or those from diverse cultural or educational backgrounds who may describe their qualifications differently.

    Cultural and Socioeconomic Factors
    ATS may inadvertently discriminate based on how different groups describe their experiences or present their backgrounds. Variations in language, terminology, or resume structure common to various cultures or socioeconomic groups might not be recognized or valued by a standardized system.

    These are just a few examples of how ATS can unintentionally introduce bias into the hiring process. That’s why Intry is here to help—our tools are designed to mitigate these issues and ensure that every candidate has a fair chance.

    #ATSresumes #resumehelp #jobseekers #jobsearch #LandingYourDreamJob #resumewriters
    www.intrycareers.com
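    A minimal sketch can make the "Historical Data Usage" point above concrete: a toy screener that "learns" term weights by counting the vocabulary of past hires, then scores new resumes by resemblance. All resumes, terms, and the scoring rule here are hypothetical.

    ```python
    # Toy screener "trained" on past hires (hypothetical data): it learns term
    # weights from whoever was hired before, then scores new resumes by overlap.
    from collections import Counter

    # Historical hires overwhelmingly described their work one way.
    past_hires = [
        "drove stakeholder engagement in agile teams",
        "agile delivery and stakeholder engagement",
        "stakeholder engagement across agile programs",
    ]

    # "Training": count how often past hires used each term.
    weights = Counter(word for resume in past_hires for word in resume.split())

    def score(resume: str) -> int:
        """Score a resume by how much its vocabulary mirrors past hires."""
        return sum(weights[word] for word in resume.split())

    # Two equally qualified candidates, phrased differently:
    print(score("stakeholder engagement within agile squads"))             # high
    print(score("built consensus with partners using iterative methods"))  # zero
    ```

    Because the weights come entirely from who was hired before, the model rewards sounding like past hires rather than being qualified, which is precisely how historical bias gets perpetuated.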

  • 🚨 Update on the BIGGEST ever legal case in HR

    In Mobley v. Workday, the court just conditionally certified one of the largest collective actions in employment law history - affecting hundreds of millions of job applicants. For those who don’t know about this case - the allegation is that Workday’s AI-driven applicant screening discriminates based on age, race, and disability, denying people the right to compete on equal footing.

    Why does this matter?
    ➡️ AI is now core to how companies sort, rank, and reject candidates.
    ➡️ But when trained on biased data - or reflecting biased design - these tools can amplify existing inequalities.
    ➡️ With 1.1 BILLION applications processed by Workday during the relevant period, the scale of harm could be massive.

    What’s actually happening? (Boring tech jargon incoming) Under the hood, these AI systems use machine learning models trained on historical hiring data to predict which candidates are “most likely” to succeed. If the historical data reflects biased hiring patterns (e.g. favouring employed, younger, male, or non-disabled candidates), and the model learns to optimise for past outcomes, it will replicate those same biases in future hiring decisions. These systems use natural language processing (NLP) to analyse CVs, and ranking algorithms to score applicants - essentially automating the shortlisting process based on patterns in data, not people (see the sketch after this post).

    This case is a massive wake-up call. If you work in HR, you should be watching this like a hawk (set up Google news alerts for it!). Employers will soon legally have to take responsibility for the tools they deploy - even if those tools are built by third parties.

    At ivee | The return-to-work platform, we are building ethical AI in our models - we surface candidates with non-linear careers that we KNOW other systems are automatically rejecting.

    Anyone got any interesting takes on this case and what it means for the future of hiring? Interested to hear from Thomas Otter Jane Hamilton Lucy Tobin Richard Fletcher Jon Yeomans Jess {Akano} Bamtefa

    #AIBias #FutureOfWork #HiringEquity #HRTech #Workday #ReturnToWork #DEI #EmploymentLaw #GenderPayGap #ResponsibleAI #ivee
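    To make the ranking step above concrete, here is a minimal sketch under stated assumptions: a linear scorer whose hand-set negative weight on career gaps stands in for what a model fitted to biased historical outcomes might learn, plus an automatic cutoff. The names, features, weights, and cutoff are all hypothetical, not Workday’s actual system.

    ```python
    # Hypothetical automated shortlister: score applicants, auto-reject the rest.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        name: str
        years_experience: float
        employment_gap_years: float  # career break, caregiving, illness, etc.

    # Weights a model could absorb from biased historical outcomes: if past
    # hiring favoured unbroken careers, gaps carry a strong negative weight.
    W_EXPERIENCE = 1.0
    W_GAP = -3.0

    def score(a: Applicant) -> float:
        return W_EXPERIENCE * a.years_experience + W_GAP * a.employment_gap_years

    def shortlist(applicants: list[Applicant], cutoff: float) -> list[Applicant]:
        """Rank by score and automatically reject everyone below the cutoff."""
        ranked = sorted(applicants, key=score, reverse=True)
        return [a for a in ranked if score(a) >= cutoff]

    pool = [
        Applicant("continuous career", 8, 0.0),
        Applicant("returner after 2-year break", 10, 2.0),  # more experienced
    ]
    print([a.name for a in shortlist(pool, cutoff=5.0)])
    # -> ['continuous career']: the returner scores 10 - 6 = 4 and is rejected
    ```

    The more experienced returner never reaches a human, which is the pattern the post describes: the model optimises for past outcomes, not potential.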
