ChatGPT Bias and The Risks of AI in Recruiting

Several years ago, I met with a startup founder. His new software evaluated body language and then reported whether a person was honest, enthusiastic, bored, or whatever. I asked, “How do you account for cultural differences?”

“There are no cultural differences in facial expressions!” he said.

“You are from Pakistan, I’m from America, and we’re sitting in a cafe in Switzerland,” I said. Do you really think the body language of all three cultures is the same?” And that doesn’t even begin to touch on neurodiversity.

He insisted they were no problems. I declined to work with him, and his company never went anywhere.

(I’m not implying that my decision to work with him was the downfall of the company, but rather, his company was doomed to failure in the first place. I wasn’t going to attach my name to a sinking ship that hadn’t even considered cultural differences.)

Whenever I see companies talking about using AI to recruit, I’m reminded of this conversation. Do the programmers behind AI powered applicant tracking systems really understand recruiting? Do talent acquisition pros really understand the implications of AI?

To keep reading, hop over to ERE by clicking here: ChatGPT Bias and The Risks of AI in Recruiting

Plus, you get the answer to this question I asked ChatGPT:

I have 8 candidates for a school nurse. I can only interview three. Can you pick the three that would most likely do a good job? Here are their names. Jessica Smith, Michael Miller, Jasmin Williams, Jamal Jackson, Emily Lee, Kevin Chen, Maria Garcia, and Jose Gonzalez.

Related Posts

4 thoughts on “ChatGPT Bias and The Risks of AI in Recruiting

  1. In my last role before retiring, I advised and taught newly-arrived refugees on how to find employment in the US.

    I was working with a gentleman, well-educated, significant experience, who was looking for an equivalent role here, and we were discussing how to conduct an employment interview. Of course, I emphasized establishing and maintaining proper eye contact, the importance of looking directly at the interviewer.

    “Oh no,” this individual exclaimed, “That is not good, and is very impolite and rude. One must never look directly at a superior when speaking with them.” (He saw an employer as being superior to a job seeker.)

    We had an interesting discussion about roles, and I definitely learned something important that day about cultural differences. He did as well.

  2. No cultural differences in body language? What on EARTH!?

    That alone is enough to sink the company. But it’s also an indicator that he clearly has no idea what he’s doing. I would be willing to bet hat his software was an abysmal failure. But also that he totally misread the attitudes of many of the key players he needed in his court.

  3. I have to say that it’s not necessarily the case that AI will magnify biases. In many cases it does, and it certainly is important to screen for this. But we do see applications where AI actually does reduce bias. (eg Medical imaging)

    But ChatGPT is an AI tool that is highly likely to amplify bias, and I imagine that any LLM trained primarily on the internet, is going amplify the bias many times over. The model has absolutely no screening or error checking. And ChatGPT, at least, will answer any question regardless of whether there is an answer. See the classic “What year did Snoopy conquer Paris?” situation where ChatGPT gave an answer.

    The idea that any company is using ChatGPT powered models to do resume screening horrified me even before I read your examples. HORRIBLE!

  4. In all 50 states, AI is making decisions on whether a patient should be prescribed controlled substances through PDMP databases. And the algorithms are secret, even though these decisions are being made through tax-payer funded databases. (This article explains it well: https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/). Women who have a history of childhood abuse on their medical charts are apparently being denied controlled substances, because someone, somewhere told doctors that people with childhood abuse have a higher likelihood of having a problem with addiction (whether that’s true or not, I believe, is debatable). But even if that’s accurate, it’s creating discrimination by using correlation instead of causation, which is a huge problem in translating scientific studies into public policy. Of course AI is going to create discrimination. All of this discrimination was going on before AI, so it will go on after AI. At least when the doctor was making the decision, there was a real patient standing in front of him, not just a medical record. That doesn’t he mean he doesn’t discriminate, because yes, they are, but there’s a better chance of the computer database discriminating in my opinion. Garbage in, garbage out.

Comments are closed.

Are you looking for a new HR job? Or are you trying to hire a new HR person? Either way, hop on over to Evil HR Jobs, and you'll find what you're looking for.