
AI Won’t Solve Hiring Bias—But People Can


In these relatively enlightened times, only a fraction of the business world still labors under the delusion that hiring bias is a thing of the past. Most HR experts, corporate leaders, and entrepreneurs make the more realistic assessment: the hiring game is far from evenhanded, even as organizations work to rectify the situation. The glaring issue is that AI can be just as biased as its flesh-and-blood counterparts.

Is AI the answer to hiring bias or its own new problem? Here’s what challenges still lie ahead and a few pointers on using bots for good.

The Problems With AI Hiring

Artificial intelligence was supposed to make hiring fairer. The theory held that machines lacked the implicit biases about gender, ethnicity, religion, sexual orientation, and other characteristics that humans struggle to set aside. By design, an AI should be able to judge candidates on their merits, resulting in a more impartial corporate world.

Like many technological concepts, however, the AI hiring revolution is saddled by flaws no one expected.

AIs Reflect Their Creators’ Biases

Despite their best intentions, most humans hold biases informed by their experiences. This is why diversity hiring programs tend to rely on marginalized or underrepresented decision-makers. By letting more voices contribute to the discussion, socially responsible companies hope to foster a more productive, well-rounded dialogue.

There’s a hitch to this plan, though. The tech industry is overwhelmingly homogeneous. For instance, the vast majority of computer science researchers are male. Sadly, recent analysis suggests the IT gender gap could persist for another century unless we take action now.

We should praise people who advocate for more diversity in the workplace. At the same time, however, it’s unrealistic to expect a homogeneous group of developers to build hiring systems that solve bias problems they’ve never experienced firsthand.

AIs Can Reinforce Bad Behavior

Many algorithms take a necessarily simplistic view of the world. Constrained by computing power, memory, and other technical limits, they must reduce complex problems to a small subset of relevant variables. This makes it easier to zero in on the critical factors, but it also restricts the range of answers an AI can produce. The algorithm learns from a limited data set and bases its decisions on whatever information it’s fed, regardless of whether the effects are beneficial, ethical, or legally sound.

This situation can exacerbate prejudicial hiring trends. Research has shown that ad-delivery algorithms in digital hiring pipelines tend to show women fewer ads for higher-paying, upwardly mobile positions. Any system trained on a real-world data set can perpetuate that state of affairs: a lack of diversity in the source data leads to a lack of diversity in the output.
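The dynamic is easy to see in miniature. The sketch below is purely illustrative: the records, the group labels, and the naive frequency-based "model" are all invented for the example, not drawn from any real hiring system.

```python
from collections import defaultdict

# Hypothetical past hiring decisions in which group "A" was
# historically favored over group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn each group's historical hire rate -- nothing more."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)
# The "model" faithfully reproduces the skew in its training data:
# group A scores far higher than group B, not because of merit but
# because of past decisions.
print(model)  # {'A': 0.75, 'B': 0.25}
```

Nothing in the code is malicious; the skew comes entirely from the data it was handed, which is exactly how a far more sophisticated system can launder yesterday's decisions into tomorrow's recommendations.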

Machine learning also gives employers a way around laws that prohibit the inclusion of specific interview questions. As the Harvard Business Review noted, “intelligent” hiring systems may be able to discover protected private information. It’s not a big leap to imagine an algorithm uncovering a candidate’s disability status, sexual orientation, medical history, or criminal record by looking at their social media postings and similar content.
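The same leakage can be sketched in a few lines. Again, everything here is made up for illustration: the profiles and the "group_membership" proxy field are hypothetical stand-ins for the kinds of public signals a real system might harvest.

```python
# The screening step never sees the protected attribute directly, but a
# proxy scraped from public profiles (a made-up "group_membership" flag)
# correlates with it, so a trivial rule recovers it anyway.
profiles = [
    {"group_membership": True,  "protected_status": True},
    {"group_membership": True,  "protected_status": True},
    {"group_membership": False, "protected_status": False},
    {"group_membership": True,  "protected_status": False},
    {"group_membership": False, "protected_status": False},
]

def infer_protected(profile):
    # The "model" simply reads the correlated proxy.
    return profile["group_membership"]

accuracy = sum(
    infer_protected(p) == p["protected_status"] for p in profiles
) / len(profiles)
print(accuracy)  # 0.8 -- far better than chance, with no direct access
```

This is why simply deleting a protected field from an application form offers little protection: as long as correlated signals remain in the data, a model can reconstruct what the law says it should never ask.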

AIs May Foster a False Sense of Progress

Algorithms don’t have the final say. In many cases, such as when employers use tools like applicant-tracking systems, the AI merely streamlines existing workflows. Humans still make the ultimate decisions on whom to hire and how to compensate them, leaving ample margin for error.

Some advocates argue that AI hiring tools do little more than present the outward signs of progress. For instance, a less-than-reputable boss might use a digital HR system to discriminate against protected groups. Such was the case in 2019, when the Equal Employment Opportunity Commission ruled that seven companies had unlawfully excluded women and older workers from seeing their Facebook job ads.




How Can Business Owners Practice Equitable Hiring—and Thereby Leverage Top Talent?

How can hiring professionals improve the situation? Recognizing the problems with AI is just the start. Here are some actionable steps to keep in mind.

Ask Diverse Candidates for Feedback

An outside perspective is crucial to overcoming negative staffing trends. It’s easier to recognize how what you’re doing might be harmful when you communicate with those impacted by your actions.

Consider that diverse employment candidates face a litany of job search challenges that others take for granted. Asking these underrepresented professionals for their opinions on your process might be a wise move.

Use AI Tools Geared Toward Bias Reduction

Hiring tools are a mixed bag. While some do little to address diversity, others come from companies that make inclusivity a core mandate. Learning more about who built your preferred tools, and the values behind them, could help you find a fairer fit. Check a software provider’s “About Us” page to see whom it employs, and look at its press coverage.

Use AI to Expand Your Applicant Pool, Not Restrict It

Although machine learning plays a huge role in filtering large data sets, this isn’t always advisable with human capital. Instead of merely whittling down applicant pools, advocates suggest using AI to access bigger pipelines. On top of offering opportunities to more diverse populations, you might uncover a hidden talent motherlode.

This may be a tough pill to swallow, but biased hiring goes way beyond AI, unconscious programmer prejudices, or even longstanding workforce talent disparities. It’s usually symptomatic of more significant problems within hiring organizations. 

If you want to cultivate a more inclusive workplace, then start paying attention to the voices that differ from yours. Get out of your cultural, social, and economic comfort zones by accepting input from employees across the spectrum and letting them wield responsibilities that foster a greater sense of purpose and career ownership. 

Be open to criticism, and resist the urge to automatically deny that your current practices could have negative impacts. This is the best way to create a genuinely welcoming professional environment—something AI can’t do on its own.




 

About the Author

AHJ George is an author and self-represented studio artist. Their work broadly addresses the mediating roles of group behavior and machine-human interfaces on creative expression.