Robots became racist after AI training, repeatedly chose Black faces as ‘criminals’

White male cyborg thinking and touching his head on a dark blue background, 3D rendering. (Photo: Adobe Stock)

The results of a new study are thought to be the first to demonstrate that robots programmed with an artificial intelligence (AI) algorithm exhibit gender and racial biases: the robots favored men over women and white people over people of color, and made assumptions about people’s jobs based solely on their appearance, according to reports by the Hub, the news center of Johns Hopkins University, and The Washington Post.

Researchers from Johns Hopkins University, the Georgia Institute of Technology and other institutions trained robots using the CLIP AI model, then asked them to scan blocks with faces on them and sort the pictured individuals into boxes in response to 62 commands.
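The mechanism at the heart of this setup is CLIP’s zero-shot image-text matching: the model scores how well each candidate text description fits an image, and a robot acting on those scores will pick whichever face best matches a prompt like “criminal.” The sketch below is not the study’s code; it is a minimal illustration using the open-source Hugging Face transformers library, with a hypothetical face.jpg file and an illustrative label set.

```python
# Minimal sketch of CLIP zero-shot scoring (not the study's actual pipeline).
# "face.jpg" and the label list are hypothetical stand-ins for illustration.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate descriptions the model will rank against the image.
labels = [
    "a photo of a doctor",
    "a photo of a homemaker",
    "a photo of a criminal",
    "a photo of a janitor",
]

image = Image.open("face.jpg")  # hypothetical face image
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores, converted to probabilities over the labels.
probs = outputs.logits_per_image.softmax(dim=1).squeeze().tolist()
for label, p in zip(labels, probs):
    print(f"{label}: {p:.3f}")
```

Any skew in which label scores highest for which faces comes directly from associations in CLIP’s web-scraped training data, which is the failure mode the researchers describe.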

The study found that when the robots were asked to place the “criminal” into a box, they kept selecting the block with a Black man’s face.

According to the Hub, the study also found that once the robots “see” people’s faces, they tend to identify women as “homemakers” over white men, identify Black men as “criminals” 10% more often than white men, and identify Latino men as “janitors” 10% more often than white men.

A recent study has found that robots designed with artificial intelligence algorithms displayed racist and sexist behavior. (Photo: Adobe Stock)

“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student in Johns Hopkins’ Computational Interaction and Robotics Laboratory, as quoted by the Hub. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”

The Post reported that researchers have gathered a number of examples of biased AI algorithms in recent years, including facial recognition systems that struggle to identify people of color and crime-prediction algorithms that unfairly target Black and Latino people.

Robots have so far escaped that scrutiny, but if they are built on similar technology they can exhibit the same biases, which may not be immediately apparent, Abeba Birhane, a senior fellow at the Mozilla Foundation, told The Post.

“When it comes to robotic systems, they have the potential to pass as objective or neutral objects compared to algorithmic systems,” said Birhane, who studies racial stereotypes in language models. “That means the damage they’re doing can go unnoticed for a long time to come.”

Meanwhile, the scientists believe methodical adjustments to research and business practices are required to stop future machines from acquiring and acting out these human preconceptions, the Hub reported.

“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said coauthor William Agnew of the University of Washington.
