Austin Clyde is an Assistant Computer Scientist in the Data Science and Learning Division of Argonne National Laboratory and a Pozen Family Center for Human Rights graduate lecturer, teaching a course on international human rights law and artificial intelligence. He holds a Ph.D. (awaiting conferral in Dec 2022) and an M.Sc. (2019) in Computer Science and an A.B. in Mathematics from the University of Chicago. Before his current appointment, he was a visiting research fellow at the Harvard Kennedy School’s Program on Science, Technology, and Society. His work has been recognized with the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research (2020) and the Department of Energy Secretary’s Honor Award for his involvement with the National Virtual Biotechnology Laboratory’s COVID-19 Response (2021).
His primary research interests include AI and HPC for science, particularly the application of LLMs and surrogate models to drug discovery and biological design, and science and technology studies (STS), particularly understanding the relationship between rights, democracy, and AI.
I am currently on the academic job market! Feel free to check out my application materials below:
Witnessing how practicing interpretive flexibility required a deep understanding of theoretical practices in AI and drug design, my research aims to take this on as a method. I am interested in how normative and social analytics such as explainability, calibration, and scrutability intersect with other sociotechnical systems such as law, democratic theory, and human rights. To that end, I aim to develop novel techniques that advance our understanding of explainability through sample-based explainability, extend our capabilities for calibrating models to data, and transform the underlying glue between the goals of science and AI research—large language models and surrogate or edge models.
Furthermore, I believe great effort is needed to advance calibration techniques for LLMs. I envision a radically different AI-for-science paradigm than current foundational model proposals. Small edge and embedded AI systems are increasingly adopted for their efficiency and ease of use. Traditional scientists are increasingly drawn to these simple models and use them in their work, and these are the models most likely to be deployed on scientific instruments. At the same time, great advances in large foundational language models have drawn renewed excitement around the performance gains across many tasks from the synergistic injection of ever more diverse data.
While many AI ethics programs focus on explainability and the virtues experts should follow in their practice, few research programs treat the idea with STS reflexivity: how do these technologies open new means for citizens to participate in world-making, and how can citizens drive the kinds of technological innovation needed in their local contexts? My research into AI civics is twofold: (1) how do we develop the kinds of public institutions that afford citizens the same access to decision-making that traditional institutions have in a world of AI? And (2) how do we foster, through public education, civic engagement, and university education, new skills for an informed citizenry in a technological world? I will articulate AI as an opportunity for empowerment through epistemic justice, where citizens are able to confirm their suspicions and bring new calculability to what oppression is. My work touches on human rights law and philosophy, for example, considering the right to the progressive realization of equal access to science and technology.