Yesterday, Reuters reported that Amazon created a recruiting engine using artificial intelligence. That, by itself, isn't news. Amazon is a leader in automation, so it makes sense that the retail giant would apply automation to its own recruiting process to quickly find the "best" candidates. Yet Amazon's tool had a big problem: it didn't like women.
As the article describes, "Everyone wanted this holy grail," one of the people said. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those." Who doesn't want that? Faster, easier hiring? Currently, there are hundreds of AI tools available to human resources, many of them in the recruiting space, that promise to do exactly this for you. But if Amazon ran into problems, what about those tools?
Amazon's tool used a 10-year look back of existing employees, a largely male-dominated group, and then ranked applicants based on what it learned makes a good Amazonian. From its own analysis, the tool learned to prefer male candidates over female candidates, keying on words that appear on applications, like "women's," along with experience, job requirements, and potential proxies for gender. Amazon tried to solve the problem by treating "women's" as a neutral word so it no longer reduced an applicant's rank, but the tool's results still had a negative impact on women. So, in 2015, Amazon abandoned the tool. Good for Amazon; it was the right thing to do. But again, there are hundreds of other AI tools out there.
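To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of how a ranking model trained on a male-skewed hiring history can learn to penalize a word like "women's," and why neutralizing that one word doesn't cure the problem when other terms act as proxies for gender. The resumes, labels, and library choice below are my own illustrative assumptions, not a description of Amazon's actual system.

```python
# A minimal, hypothetical sketch (not Amazon's actual system) of how a resume
# ranker trained on a male-skewed hiring history can learn to penalize gendered
# words, and why neutralizing one word ("women's") leaves proxy terms behind.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: 1 = hired, 0 = rejected. The resumes and labels are
# invented for illustration and deliberately skewed, like the history was.
resumes = [
    "software engineer, chess club captain, java",            # hired
    "software engineer, rugby team, distributed systems",     # hired
    "software engineer, women's chess club captain, java",    # rejected
    "software engineer, women's college, machine learning",   # rejected
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The word "women" picks up a negative weight simply because it co-occurred
# with rejections in the historical data the model was taught from.
for word, coef in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {coef:+.3f}")

# Deleting "women's" before scoring does not fix this: "college" (a proxy that
# also co-occurred only with rejections here) still carries a penalty, so the
# ranker keeps disadvantaging the same candidates.
```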
At this year’s HR Tech Conference in Las Vegas, my friend Heather Bussing and I presented on this very topic. We spoke about how AI can both amplify and reduce bias. Here are a few of the highlights:
- We know that AI is biased because people are biased.
- We know the sources of the bias include the data we use to teach the AI, the programming itself, the design of the tool, and the people who create it.
- Employers have to be vigilant with their tools. We have to test for bias, then retest, and retest (and retest). One simple example of such a test is sketched after this list.
- Employers, not the AI, are ultimately responsible for the results of the tool, because even if we follow the tool's output, the employer is making the ultimate employment decision.
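To show what "test and retest" can look like in practice, here is a minimal sketch of one common screen, an adverse-impact check based on the four-fifths (80%) rule of thumb: compare each group's selection rate to the highest group's rate and flag ratios that fall below 0.8. The counts below are invented, and this single check is nowhere near a complete validation of a hiring tool; it is just an example of the kind of number an employer can ask a vendor to produce and explain.

```python
# A minimal sketch of one bias test: an adverse-impact check using the
# four-fifths (80%) rule of thumb. All counts below are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool recommended."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    groups maps a label to (selected, applicants). A ratio under 0.8 is a
    conventional red flag that warrants review, not proof of discrimination.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical output of an AI screening tool over one hiring cycle.
results = {
    "men":   (48, 300),   # 16.0% recommended
    "women": (21, 250),   #  8.4% recommended
}

for group, ratio in adverse_impact_ratios(results).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group:6s} impact ratio = {ratio:.2f}  [{flag}]")
```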
It is very possible, even probable, that the tools on the market today have bias in them. Employers can't simply rely on a vendor salesperson's enthusiastic assurance that the tool eliminates bias. Instead, employers should assume bias plays a role, look at their tools with a critical eye, and work to solve the problem themselves.
I applaud Amazon for doing the right thing here, including testing its tool, reviewing the results, and abandoning the tool when it became clear that bias played a part in the results. This isn't easy for every employer, and not every employer is going to have the resources to do it. That is why employers have to be vigilant and hold their vendors accountable for helping make sure bias isn't affecting employment decisions, even when an AI tool is involved. Because ultimately, the employer could be liable for the discrimination that the tool aids.