“For AI technologies to overcome this it’s not enough to just plaster over the diversity cracks,” says Flick. “The whole system, from data to engineers to c-suite, needs to be diversified and focused on ensuring harmful biases are not perpetuated by this technology.”
Flick has a simple message that gets to the core of the problem. “The usual focus on the engineering should be the lowest priority,” she says. “If the data going in isn’t good enough, it shouldn’t even get to the engineering stage.”

Luccioni ran a project called Stable Bias that analysed the output of text-to-image generators. “The models will assume that whatever thing that you want to generate, it’ll be male,” says Luccioni. “An astronaut is going to be a white man, unless you say, ‘black female astronaut’.”
It’s not enough to assume that things will sort themselves out, warns Luccioni. “Given the fact that AI models are a reflection of our values, our society, and our choices, it’s really important to have more diversity in the people making these choices,” she says. It’s a mistake to believe that these are simple mathematical models free from any risk of human bias. “Actually, they reflect the choices we make as people training and using AI models.”
“It’s really important to improve the gender gap,” she says, “and to contribute to improving AI models’ performance on non-males.”