The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models

Abeba Birhane, Sepehr Dehdashtian, Vinay Uday Prabhu and Vishnu Boddeti
ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2024.

Abstract

‘Scale the model, scale the data, scale the GPU farms’ is the reigning sentiment in the world of generative AI today. While model scaling has been extensively studied, data scaling and its downstream impacts on model performance remain under-explored. This is particularly important in the context of multimodal datasets whose main source is the World Wide Web, condensed and packaged as the CommonCrawl dump, which is known to exhibit numerous drawbacks. In this paper, we evaluate the downstream impact of dataset scaling on 14 visio-linguistic models trained on the LAION-400M and LAION-2B datasets by measuring racial and gender bias using the Chicago Face Dataset (CFD) as the probe. Our results show that as the scale of training data increased, the probability of a pre-trained CLIP model misclassifying human images as non-human offensive classes such as chimpanzee, gorilla, and orangutan decreased, but the probability of misclassifying the same images as human offensive classes such as criminal increased. Furthermore, of the 14 Vision Transformer-based visio-linguistic models we evaluated, the probability of predicting a Black male and a Latino male as criminal increases by 65% and 69%, respectively, when the dataset is scaled from LAION-400M to LAION-2B for the larger ViT-L models. For the smaller ViT-B (base) models, however, the probability of predicting a Black male and a Latino male as criminal decreases by 20% and 47%, respectively, when the dataset is scaled from LAION-400M to LAION-2B. We present a qualitative and historical analysis of the model audit results, reflect on our findings and their implications for dataset curation practice, and close with a summary of mitigation mechanisms and ways forward.
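To make the probing setup concrete: CLIP-style zero-shot classification embeds an image and a set of text prompts (one per candidate class), then assigns the class whose prompt embedding is most similar to the image embedding. The sketch below illustrates only this decision rule in pure Python; the embeddings, class list, and temperature are made-up toy values, not real CLIP outputs, and a real audit would obtain embeddings from a pretrained model.

```python
import math

# Schematic sketch of CLIP-style zero-shot classification as used to probe
# the models in this paper. All vectors below are toy stand-ins, NOT real
# CLIP embeddings; the class list is illustrative only.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(scores, temperature=0.01):
    """Turn similarity scores into a probability distribution.
    CLIP uses a learned logit scale; 0.01 here is an arbitrary stand-in."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def zero_shot_classify(image_emb, class_embs):
    """Return per-class probabilities and the index of the predicted class."""
    sims = [cosine(image_emb, c) for c in class_embs]
    probs = softmax(sims)
    return probs, max(range(len(probs)), key=probs.__getitem__)

# Hypothetical prompts ("a photo of a <class>") with toy embeddings.
classes = ["human being", "criminal", "gorilla"]
class_embs = [[0.9, 0.1, 0.0], [0.2, 0.9, 0.1], [0.0, 0.2, 0.9]]
image_emb = [0.8, 0.3, 0.1]  # toy embedding of one face image

probs, pred = zero_shot_classify(image_emb, class_embs)
print(classes[pred], [round(p, 3) for p in probs])
```

The audit question then reduces to how often, across CFD face images, the argmax lands on an offensive class rather than a neutral human one, and how that rate shifts between models trained on LAION-400M and LAION-2B.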