A recent study highlights how deeply social attitudes are embedded in language and suggests that AI models absorb these implicit biases, pointing to the need for cultural interventions.
They found that the associations picked up by large AI language models like ChatGPT match more closely with the second, indirect measure than with the attitudes people explicitly state. "With the rise of AI and large language model applications, we as consumers, leaders, researchers, or policymakers need to understand what these models are representing about the social world," says lead author Dr. Tessa Charlesworth of Northwestern University's Kellogg School of Management.
While emphasizing the correlational nature of the findings, the researchers aim to continue exploring sociocultural influences. "Given that we saw some variation in which of the non-English languages showed the correlation, it is important to understand what kind of social and cultural factors could help explain greater transmission between bias and language," says Dr. Charlesworth.