Austin Clyde is an Assistant Computational Scientist in the Data Science and Learning Division of Argonne National Laboratory. He holds a Ph.D. in computer science from the University of Chicago, where he continues to lecture in the Pozen Family Center for Human Rights, teaching courses on international human rights law and artificial intelligence. He is an expert in applying and developing artificial intelligence techniques for scientific problems, particularly in high-performance computing, drug design, and large language models. His current research explores interpretable AI systems for science and the role of interpretation in algorithmic decision systems. The Association for Computing Machinery (ACM) has twice recognized his work with the Gordon Bell Special Prize, in 2020 and 2022, and his involvement with the National Virtual Biotechnology Laboratory's COVID-19 response received a Department of Energy Secretary's Honor Award. Before his current appointment, he was a visiting research fellow at the Harvard Kennedy School's Program on Science, Technology, and Society.
His primary research interests include AI and HPC for science, particularly the application of LLMs and surrogate models to drug discovery and biological design, and science and technology studies (STS), particularly the relationship between rights, democracy, and AI.
I am currently on the academic job market! Feel free to check out my application materials below:
Biography » CV » Research statement » Teaching statement » DEI statement »
Having witnessed how practicing interpretive flexibility requires a deep understanding of the theoretical practices of AI and drug design, I aim to take this flexibility on as a method in my research. I am interested in how normative and social analytics such as explainability, calibration, and scrutability intersect with other sociotechnical systems such as law, democratic theory, and human rights. To that end, I aim to develop novel techniques that advance our understanding of explainability through sample-based explanation, extend our capability to calibrate models to data, and transform the underlying glue between the goals of science and AI research: large language models and surrogate or edge models.
Furthermore, I believe great effort is needed to advance calibration techniques for LLMs. I envision a radically different AI-for-science paradigm than current foundation-model proposals. Small edge and embedded AI systems are increasingly used for their efficiency and ease of use; traditional scientists are increasingly drawn to these simpler models and are adopting them in their work, and they are the models most likely to be deployed on scientific instruments. At the same time, advances in large foundation language models have drawn renewed excitement around the performance gains on many tasks that come from the synergistic injection of ever more diverse data.
While many AI ethics programs focus on explainability and the virtues experts should follow in their practice, few research programs treat the idea with STS reflexivity: how do these technologies open new means for citizens to participate in world-making, and how can citizens drive the kinds of technological innovation needed in their local contexts? My research into AI civics is twofold: (1) how do we develop the kinds of public institutions that afford citizens the same access to decision-making in a world of AI that traditional institutions have? And (2) how do we foster, through public education, civic engagement, and university teaching, the new skills an informed citizenry needs in a technological world? I will articulate AI as an opportunity for empowerment through epistemic justice, where citizens are able to confirm their suspicions and bring new calculability to what oppression is. My work touches on human rights law and philosophy, for example in considering the right to the progressive realization of equal access to science and technology.
(In progress)