The Academic Fringe Festival - Nithya Sambasivan: The Myopia of Model Centrism
11 April 2022, 17:00 to 18:00 - Location: Online
by Nithya Sambasivan

Abstract
AI models seek to intervene in increasingly high-stakes domains, such as cancer detection and microloan allocation. What view of the world guides AI development in high-risk areas, and how does this view regard the complexity of the real world? In this talk, I will present results from my multi-year inquiry into how the fundamentals of AI systems---data, expertise, and fairness---are viewed in AI development. I pay particular attention to developer practices in AI systems intended for low-resource communities, especially in the Global South, where people are enrolled as labourers or untapped DAUs. Despite the inordinate role these fundamentals play in model outcomes, data work is under-valued; domain experts are reduced to data-entry operators; and fairness and accountability assumptions do not scale past the West. Instead, model development is glamourised, and model performance is viewed as the indicator of success. The overt emphasis on models, at the cost of ignoring these fundamentals, leads to brittle and reductive interventions that ultimately displace functional and complex real-world systems in low-resource contexts. I put forth practical implications for AI research and practice to shift away from model centrism and towards enabling human ecosystems; in effect, building safer and more robust systems for all.

Speaker Biography
Dr. Nithya Sambasivan is a sociotechnical researcher whose work solves hard, socially important design problems affecting marginalised communities in the Global South. Her current research re-imagines AI fundamentals to work for low-resource communities. Dr. Sambasivan's work has been widely covered in venues such as VentureBeat, ZDNet, Scroll.in, O'Reilly, New Scientist, the State of AI report, Hacker News and more, while influencing public policy such as the Indian government's strategy for responsible AI and motivating the NeurIPS Datasets track. As a former Staff Research Scientist at Google Research, she pioneered several original, award-winning research initiatives, such as responsible AI in the Global South, human-data interaction, gender equity online, and next billion users, which fundamentally shaped the company's strategy for emerging markets and landed as new products affecting millions of users in Google Station, Search, YouTube, Android, Maps and more. Dr. Sambasivan founded and managed a blueprint HCI team at Google Research Bangalore and set up the Accra HCI team, in contexts with limited existing HCI pipelines. Her research has received several best paper awards at top-tier computing conferences. Homepage: https://nithyasambasivan.com/

More information
In this second edition on the topic of "Responsible Use of Data", we take a multi-disciplinary view and explore further lessons learned from success stories, as well as from examples in which the irresponsible use of data can create and foster inequality and inequity, perpetuate bias and prejudice, or produce unlawful or unethical outcomes. Our aim is to discuss and draw up guidelines to make the use of data a responsible practice.

Join us
To receive announcements of upcoming presentations and events organized by TAFF and to get the Zoom link to join the presentations, join our mailing list.

TAFF-WIS Delft