Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases

State-of-the-art image-classifying AI models trained on ImageNet, a popular (but problematic) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more. That's according to new research from scientists at Carnegie Mellon University and George Washington University, who developed what they claim is a novel method for quantifying biased associations between representations of social concepts (e.g., race and gender) and attributes in images. When the researchers compared these learned associations with statistical patterns in online image datasets, the comparison suggested the models pick up bias from the way people are stereotypically portrayed on the web.
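To give a sense of how quantifying biased associations between embeddings might work, here is a minimal sketch of a WEAT-style association test applied to image embeddings. The function names, the use of cosine similarity, and the Cohen's-d-style effect size are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of a WEAT-style association test over image embeddings.
# X, Y: embeddings of images representing two social groups.
# A, B: embeddings of images depicting contrasting attributes.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # How much more similar embedding w is to attribute set A than to B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Cohen's-d-style effect size: how differently the two concept
    # groups X and Y associate with the attribute groups A and B.
    x_assoc = np.array([association(x, A, B) for x in X])
    y_assoc = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([x_assoc, y_assoc])
    return (x_assoc.mean() - y_assoc.mean()) / pooled.std()
```

In this setup, an effect size far from zero would indicate that one group's images are systematically embedded closer to one attribute set than the other, i.e., a stereotypical association learned by the model.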

Companies and researchers regularly use machine learning models trained on massive internet image datasets. To reduce costs, many employ state-of-the-art models pretrained on large corpora and adapt them to other tasks, a powerful approach called transfer learning. A growing number of computer vision methods are unsupervised, meaning they leverage no labels during training; with fine-tuning, practitioners can then specialize these general-purpose models for a downstream task using only a modest amount of labeled data.
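As a rough sketch of what that workflow looks like in practice (the model choice, frozen backbone, and hyperparameters below are illustrative assumptions, not anything from the study):

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pretrained backbone
# and fine-tune only a new classification head on a downstream task.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 with weights learned on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone so its ImageNet features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a hypothetical 10-class downstream task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Because the backbone's weights are reused unchanged, any biased associations baked into those pretrained features carry over to whatever task the model is fine-tuned for.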
