How AI Reinforces Caste: Creating and Amplifying Systematic Inequality
Catherine Powell
In arguing that artificial intelligence (“AI”) creates, reinscribes, and amplifies caste, this Article asserts that algorithmic discrimination is not just a bug but an essential feature of the system that powers the digital economy. Because this economy traffics in our information and is monetized through targeted advertising, it depends on knowing our race, gender, and other protected characteristics (i.e., our caste). As an information economy, the digital economy is driven by population-wide demographics and predictions, which algorithms can process and commodify at scale. Digital surveillance can therefore lead not only to individual harm but also to structural, systemic harms when protected characteristics provide a basis for algorithmic bias within this economy. Yet existing law embeds two interrelated challenges, and this Article fills a niche by addressing them together. First, equality law faces the ongoing retrenchment of civil rights law, given the Supreme Court’s increasingly narrow focus on individualized harm (i.e., the formal equality, “color blind” paradigm) rather than on substantive discrimination. Second, technology law and data governance law similarly foreground individual rights over relational, social understandings. The result is a mismatch between, on the one hand, equality law’s individualized-harm approach to caste and, on the other hand, data as “quasi capital” built on datafied information gleaned from inherently relational, identity-based, society-wide characteristics. Just as data governance and tech regulation should recognize relational accounts of online harm, so too must equality law adjust to recognize the ways technology amplifies the volume and velocity of structural, systemic inequality. Caste is a useful concept because it connects individual status with group status within a caste system: a socially constructed system of dominance and subordination.