On Tuesday 2/12, Samuel Carton, a PhD candidate from the School of Information, will present his work on "The Design and Evaluation of Algorithms for Explaining Text Classifiers."
Please help forward this announcement to anyone who might be interested! Light lunch will be provided.
Please RSVP by 12 PM on 2/10 if you will be there.
The machine learning community has recently begun to recognize the need for interpretable predictive models. While such models can be trained to be highly accurate, sometimes even more accurate on average than their human counterparts, they tend to fail unexpectedly and are ill-equipped to handle nuance and outliers. One of the biggest challenges in this area is first defining what makes an explanation effective, and then designing algorithms optimized for that quality. In this talk I discuss two papers: the first presents an algorithm for explaining text classifier decisions by producing high-recall attention masks, and the second describes a crowdsourced experiment exploring the impact of this type of explanation on human performance in a model-assisted decision task.
Sam is a PhD candidate in the School of Information, advised by Paul Resnick and Qiaozhu Mei. He received a BS in computer science from Northwestern University. Sam's current research interests are in explainable machine learning, where he is interested both in engineering new explanation methods and in understanding the human factors that determine which explanations are effective in real-world settings. His past work includes projects on tracking and visualizing the spread of rumors over social media, as well as predictive modeling of police misconduct. His professional experience includes an internship with Microsoft Research and the Data Science for Social Good Fellowship at the University of Chicago.