Robots are Racist and Sexist: AI Isn’t Ready for Recruiting

The big problem with recruiting and hiring is that it’s done by humans, and humans make mistakes and are subject to their own biases. So, the best solution is to create programs that evaluate candidates and choose the best person. Problem solved!

But the latest research in artificial intelligence shows us that even robots have bias problems.

Researchers used a neural network called CLIP to control a robot’s arm. The arm then sorted blocks bearing people’s pictures into different categories. Easy enough, right? But when researchers asked the robot to pick the doctors, janitors, murderers, and homemakers out of the person-bedecked blocks, the robot demonstrated “toxic stereotypes.”
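
For context on the mechanics: CLIP scores an image against a set of text captions, and “sorting” a block amounts to picking the caption with the best score. Below is a minimal sketch of that zero-shot matching step, assuming the Hugging Face transformers library and OpenAI’s public clip-vit-base-patch32 checkpoint; the caption list and image file name are placeholders, not the study’s actual materials.

```python
# Minimal sketch of CLIP-style zero-shot matching, assuming the Hugging
# Face `transformers` library and OpenAI's public CLIP checkpoint.
# It illustrates the general technique, not the researchers' pipeline.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = [
    "a photo of a doctor",
    "a photo of a janitor",
    "a photo of a murderer",
    "a photo of a homemaker",
]
image = Image.open("face_on_block.jpg")  # placeholder file name

# CLIP embeds the image and every caption, then scores each pair;
# the highest-scoring caption "wins," even though none of these labels
# can actually be read off a face.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```

Because the model must always pick something, whatever associations it absorbed from its web-scraped training data, including stereotyped ones, break the tie.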

To keep reading, click here: Robots are Racist and Sexist: AI Isn’t Ready for Recruiting

3 thoughts on “Robots are Racist and Sexist: AI Isn’t Ready for Recruiting”

  1. The fundamental misunderstanding of AI that pervades this and so many other comments on it is that somehow AI is, in any sense of the word, responsible in an independent way. AI is nothing more than a piece of if-then logic combined with the processing speed of electronics. Humans make AI systems. Humans are ultimately responsible for AI. The algorithms encoded into AI software are what is not ready.

    Recruiting is not a simple process; it takes human clear-headedness to do well. There is much more on what works in recruiting, why, and how in my book “Recruiting Realities: Avoiding Bad Hires.” Download it here: http://www.performance-challenge-recruiter.com

    Roelf

  2. If I understand correctly (and I may not), the AI/Robot is not a thinking/decision-making entity.

    Thus, to say that “Robots are Racist and Sexist” is at best misleading and at worst disingenuous.

    What is most likely happening here (I don’t know for sure) is that the AI/Robot is searching through data and simply reporting (deciding on the basis of) what it finds.

    We read that, “… when researchers asked the robot to pick the doctors, janitors, murderers, and homemakers out of the person-bedecked blocks, the robot demonstrated ‘toxic stereotypes.'”

    I think (and again I could be wrong) that what is happening here is that the AI/Robot has been programmed to identify pictures by a rather limited set of characteristics (such as gender, color, apparent age, and ethnicity), and then has been programmed to search through real-world data to find and categorize matches.

    For example, the AI/Robot finds a picture of a black+Asian+female+middle-aged person, searches through the database, finds only 6 matches, all of which are listed as janitors, and so categorizes that picture as a janitor. (A toy version of this lookup is sketched after this comment.)

    So, it may be “Racist” and “Sexist” that these 6 individuals were apparently working as janitors and not brain surgeons, but the AI/Robot is not “deciding” to place these 6 people into a janitor role, it is simply reporting what it finds in the database.

    To say that the robot demonstrated “toxic stereotypes” is very misleading. Again, the AI/Robot only “demonstrated” “toxic stereotypes” because such stereotypes were present in the database.

    If someone asked you if it is raining outside, and you say that it is, that doesn’t mean that you caused the rain, and it doesn’t mean you are responsible for that person having their planned outdoor picnic ruined.
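
    As a toy illustration of the lookup this comment describes (every row below is invented for the example; this is not the actual system or its data), a majority vote over exact-match database rows simply echoes whatever skew the rows contain:

    ```python
    # Assign an occupation by majority vote over matching database rows.
    # Every row below is invented for the example.
    from collections import Counter

    PERSON = {"gender": "female", "ethnicity": "Black+Asian", "age": "middle-aged"}

    database = [(PERSON, "janitor") for _ in range(6)] + [
        ({"gender": "male", "ethnicity": "white", "age": "middle-aged"}, "doctor"),
        ({"gender": "male", "ethnicity": "white", "age": "middle-aged"}, "janitor"),
    ]

    def predict_occupation(person):
        """Vote over every row whose attributes match exactly."""
        matches = [job for attrs, job in database if attrs == person]
        return Counter(matches).most_common(1)[0][0] if matches else "unknown"

    # Prints "janitor": not because anything decided these people belong
    # in that role, but because the only six matching rows say so.
    print(predict_occupation(PERSON))
    ```

    Change the six rows and the “prediction” changes with them; the vote is a mirror of the data, which is the commenter’s point.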

    1. There is a tale from the early, early days of AI, in which the military trained a neural net computer to distinguish between friendly tanks and enemy tanks. In the lab, it worked nearly perfectly, but in the field, it was basically random. It took some effort to determine where they had gone wrong, but they finally figured out that the pictures of friendly tanks they had trained it with were high-quality marketing shots from the manufacturer, while the pictures of enemy tanks were grainy, blurry spy shots. Instead of training the computer to distinguish between friendly and enemy tanks, they had trained it to distinguish between good photos and bad photos. (A toy numeric version of this failure mode is sketched after this reply.)

      Training AI to do something isn’t nearly as difficult as figuring out *what* you trained it to do.

      And I do not believe that anyone alive will live long enough to see computers that actually *understand* anything at all.
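
      A toy numeric version of that failure mode (all numbers invented; scikit-learn is used purely for illustration): a spurious feature that perfectly tracks the label during training gets learned in place of the real, noisier signal, and accuracy collapses once the two decouple.

      ```python
      # Spurious-correlation demo: "photo sharpness" equals the label in
      # training (crisp marketing shots vs. grainy spy shots), so the
      # model leans on it instead of the real tank features.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1000

      y_train = rng.integers(0, 2, n)                 # 1 = friendly, 0 = enemy
      real_signal = y_train ^ (rng.random(n) < 0.4)   # real feature, quite noisy
      sharpness = y_train.astype(float)               # spurious, tracks label exactly
      X_train = np.column_stack([real_signal, sharpness])

      model = LogisticRegression().fit(X_train, y_train)

      # In the field, sharpness no longer has anything to do with the label.
      y_field = rng.integers(0, 2, n)
      X_field = np.column_stack([y_field ^ (rng.random(n) < 0.4),
                                 rng.random(n).round()])

      print("lab accuracy:  ", model.score(X_train, y_train))  # near perfect
      print("field accuracy:", model.score(X_field, y_field))  # near chance
      ```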
