
Schedule

The Interpretable ML Symposium is in Hall C.

2:00 - 2:10 Opening remarks
2:10 - 2:40 Invited talk Bernhard Schölkopf (MPI)
The role of causality for interpretability.
video, slides
2:40 - 3:10 Invited talk Kiri Wagstaff (JPL)
Interpretable Discovery in Large Image Data Sets.
video, slides
Abstract: Automated detection of new, interesting, unusual, or anomalous items within large data sets has great value for applications from finance (e.g., fraud detection) to science (observations that don’t fit a given theory can lead to new discoveries). In particular, novelty detection in image data sets could help detect new near-Earth asteroids, fresh impact craters on Mars, and other key phenomena that might otherwise be lost within a large archive. Most image data analysis systems are turning to convolutional neural networks (CNN) to represent image content due to their success in achieving high classification accuracy rates. However, CNN representations are notoriously difficult for humans to interpret. In this talk, I will discuss a strategy that combines novelty detection with CNN image features and yields interpretable explanations of novel image content.
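The abstract describes the approach only at a high level. As a rough illustration of the general pattern it mentions (score an image as novel in a CNN feature space, then explain the score), here is a minimal sketch; the low-rank reconstruction model, the random stand-in features, and all variable names are assumptions for illustration, not the speaker's actual method:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for CNN descriptors of an image archive (one row per image),
# e.g. penultimate-layer activations exported beforehand; random data here
# so the sketch runs on its own.
rng = np.random.default_rng(0)
archive_features = rng.normal(size=(1000, 512))
query_features = rng.normal(loc=0.5, size=(1, 512))   # a new image to score

# Model the archive with a low-rank subspace; an image the subspace cannot
# reconstruct well is flagged as novel.
pca = PCA(n_components=32).fit(archive_features)
reconstruction = pca.inverse_transform(pca.transform(query_features))
residual = query_features - reconstruction
novelty_score = float(np.linalg.norm(residual))

# The residual doubles as an explanation: it indicates which feature
# dimensions the known data fails to account for, and can be traced back
# toward pixels or concepts for human inspection.
top_dims = np.argsort(-np.abs(residual[0]))[:10]
print(f"novelty score: {novelty_score:.2f}")
print("feature dimensions driving the score:", top_dims)
```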
3:10 - 3:40 Spotlight talks
video, slides
3:40 - 4:15 Coffee Break and posters
4:15 - 4:30 Introduction of the Explainable Machine Learning Challenge by its organizers
video, slides
4:30 - 5:00 Invited talk Kilian Weinberger (Cornell)
The (hidden) Cost of Calibration.
video, slides
Abstract: Arguably the simplest and most common approach towards classifier interpretability is to ensure that a classifier generates well-calibrated probability estimates. In this talk I consider the (hidden) costs associated with such calibration efforts. I argue that on one hand calibration can be achieved with surprisingly simple means, but on the other hand it brings with it potential limitations in terms of fairness understood as error disparity across population groups.
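As one concrete reading of the claim that calibration "can be achieved with surprisingly simple means", the sketch below applies Platt scaling, i.e. fitting a one-dimensional logistic regression on held-out scores; the dataset, base classifier, and split are illustrative assumptions rather than anything taken from the talk:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A base classifier whose raw scores are typically not well calibrated.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.5, random_state=0)
base = RandomForestClassifier(n_estimators=100, random_state=0)
base.fit(X_train, y_train)

# Platt scaling: a one-dimensional logistic regression that maps the raw
# score to a calibrated probability, fit on held-out data.
raw_hold = base.predict_proba(X_hold)[:, 1].reshape(-1, 1)
calibrator = LogisticRegression().fit(raw_hold, y_hold)

def calibrated_proba(X_new):
    """Calibrated probability of the positive class for new inputs."""
    raw = base.predict_proba(X_new)[:, 1].reshape(-1, 1)
    return calibrator.predict_proba(raw)[:, 1]

print(np.round(calibrated_proba(X_hold[:5]), 3))
```

Note that this recipe does not touch the fairness concern in the abstract: a model can be well calibrated overall and still show different error rates across population groups, so a calibration check alone does not settle the question the talk raises.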
5:00 - 6:00 Panel discussion Hanna Wallach, Kiri Wagstaff, Suchi Saria, Bolei Zhou, and Zack Lipton. Moderated by Rich Caruana.
video
6:00 - 7:00 Break for dinner and posters
7:00 - 7:30 Invited talk Victoria Krakovna (DeepMind)
Interpretability for AI safety.
video, slides
Abstract: There are many types of interpretability, from identifying influential features and data points to learning disentangled representations. Which of these are the most relevant for building safe AI systems? We will examine how different safety problems benefit from different types of interpretability, and what questions interpretability researchers can focus on to contribute to advancing AI safety.
7:30 - 8:00 Invited talk Jenn Wortman Vaughan (Microsoft)
Manipulating and Measuring Model Interpretability.
video, slides
Abstract: Machine learning models are often evaluated in terms of their performance on held-out data. But good performance on held-out data may not be enough to convince users that a model is trustworthy, reliable, or fair in the wild. To address this problem, a new line of research has emerged that focuses on developing interpretable machine learning methods. However, there is still widespread disagreement about what interpretability means and how to quantify and measure the interpretability of a machine learning model. We believe that this confusion arises, in part, because interpretability is not something that can be directly manipulated or measured. Rather, interpretability is a latent property that can be influenced by different manipulable factors (such as the number of features, the complexity of the model, or even the user interface) and that impacts different measurable outcomes (such as an end user's ability to trust or debug the model). As such, we argue that to understand interpretability, it is necessary to directly manipulate and measure the influence that different factors have on real people's abilities to complete tasks. We run a large-scale randomized human-subject experiment, varying factors that are thought to make models more or less interpretable and measuring how these changes impact lay people's decision making. We focus on two factors that are often assumed to influence interpretability, but rarely studied formally: the number of features and whether the model is clear or black-box. We view this experiment as a first step toward a larger agenda aimed at quantifying and measuring the impact of factors that influence interpretability. This talk is based on joint work with Forough Poursabzi-Sangdeh, Dan Goldstein, Jake Hofman, and Hanna Wallach.
8:00 - 8:30 Invited talk Jerry Zhu (UW-Madison)
Debugging the Machine Learning Pipeline.
video, slides
Abstract: Suppose you went through the standard machine learning pipeline and trained a model. Alas, your model did not work as expected. Can we partially automate the debugging process to figure out what went wrong? We advocate the following conceptual approach for machine learning debugging: (1) specify the "moving parts" of your pipeline, namely things which can be changed; (2) specify the postconditions of your desired model; (3) minimally wiggle the moving parts and train new models until the postconditions are satisfied. The wiggled moving parts are flagged as potential bugs for you to interpret. Of course, one key for the approach to work is to do (3) efficiently. I will illustrate this approach on bugs in training set labels. This is a challenging combinatorial bilevel optimization problem, but can be relaxed into a continuous optimization problem.
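The abstract stays at the conceptual level. As a toy illustration of step (3) applied to training-set labels, the sketch below greedily flips one label at a time until a postcondition on a small trusted set holds, then flags the flipped indices as candidate bugs. The greedy search stands in for the continuous relaxation mentioned in the talk, and the data, postcondition, and names are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1-D toy training set: the true rule is "class 1 iff x >= 0", but a few
# labels just above the boundary were recorded incorrectly (the hidden bugs).
X = np.linspace(-2, 2, 41).reshape(-1, 1)     # x = -2.0, -1.9, ..., 2.0
y = (X.ravel() >= 0).astype(int)
bugs = [21, 22, 23, 24]                       # x = 0.1 ... 0.4, mislabeled as 0
y[bugs] = 0

# Postcondition: the trained model must get a small trusted set right.
X_trusted = np.array([[0.15], [0.35], [-0.25]])
y_trusted = np.array([1, 1, 0])

def fit(labels):
    return LogisticRegression(max_iter=1000).fit(X, labels)

def postcondition_met(model):
    return (model.predict(X_trusted) == y_trusted).all()

# Step (3): minimally "wiggle" the moving part (here, the training labels) by
# greedily flipping the single label that most improves the trusted-set fit,
# stopping once the postcondition holds or a small budget is spent.  The
# flipped indices are flagged as potential bugs for a human to interpret.
labels, flagged = y.copy(), []
for _ in range(10):
    if postcondition_met(fit(labels)):
        break
    gains = []
    for i in range(len(labels)):
        trial = labels.copy()
        trial[i] = 1 - trial[i]
        proba = fit(trial).predict_proba(X_trusted)
        gains.append(proba[np.arange(len(y_trusted)), y_trusted].mean())
    best = int(np.argmax(gains))
    labels[best] = 1 - labels[best]
    flagged.append(best)

print("flagged label indices:", sorted(flagged), "| truly corrupted:", bugs)
```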
8:30 - 9:30 Panel debate and follow-up discussion Yann LeCun, Kilian Weinberger, Patrice Simard, and Rich Caruana.
video

Abstract

Complex machine learning models, such as deep neural networks, have recently achieved outstanding predictive performance in a wide range of applications, including visual object recognition, speech perception, language modeling, and information retrieval. There has since been an explosion of interest in interpreting the representations learned and decisions made by these models, with profound implications for research into explainable ML, causality, safe AI, social science, automatic scientific discovery, human computer interaction (HCI), crowdsourcing, machine teaching, and AI ethics. This symposium is designed to broadly engage the machine learning community on the intersection of these topics — tying together many threads which are deeply related but often considered in isolation.

For example, we may build a complex model to predict levels of crime. Predictions on their own produce insights, but by interpreting the learned structure of the model, we can gain important new insights into the processes driving crime, enabling us to develop more effective public policy. Moreover, if we learn that the model is making good predictions by discovering how the geometry of clusters of crime events affects future activity, we can use this knowledge to design even more successful predictive models. Similarly, if we wish to make AI systems deployed on self-driving cars safe, straightforward black-box models will not suffice, as we will need methods of understanding their rare but costly mistakes.

The symposium will feature invited talks and two panel discussions. One of the panels will have a moderated debate format, where arguments are presented on each side of key topics chosen prior to the symposium, with the opportunity to follow up each argument with questions. This format will encourage an interactive, lively, and rigorous discussion, working towards the shared goal of making intellectual progress on foundational questions. During the symposium, we will also feature the launch of a new Explainable Machine Learning Challenge, involving the creation of new benchmarks that motivate the development of interpretable learning algorithms.

Call for Papers

We invite researchers to submit their recent work on interpretable machine learning from a wide range of approaches, including (1) methods that are designed to be more interpretable from the start, such as rule-based methods, (2) methods that produce insight into existing ML models, and (3) perspectives either for or against interpretability in general. Topics of interest include:

  • Deep learning
  • Kernel, tensor, graph, or probabilistic methods
  • Automatic scientific discovery
  • Safe AI and AI Ethics
  • Causality
  • Social Science
  • Human-computer interaction
  • Quantifying or visualizing interpretability
  • Symbolic regression

Authors are welcome to submit 2-4 page extended abstracts, in the Interpretable ML NIPS style (the NIPS style with a different footer). Page counts do not include references or supplementary information, if any. Author names do not need to be anonymized. Accepted papers will have the option of inclusion in the proceedings. Certain papers will also be selected to present spotlight talks. Email submissions to interpretML2017@gmail.com.

Key Dates

Submission Deadline: 20 October 2017
Acceptance Notification: 23 October 2017
Symposium: 7 December 2017

Accepted Papers

There are 35 accepted papers. All are listed below; 22 are also included (at the authors' discretion) in the Proceedings of the NIPS 2017 Symposium on Interpretable Machine Learning.

All accepted papers, with spotlights highlighted (recorded spotlight talks, slides):

Debate

Time
8:30pm - 9:30pm on Thursday, December 7th (Hall C). Video of debate.

Proposition
Interpretability is necessary in machine learning

Participants
Team A [for the proposition]: Rich Caruana (A1), Patrice Simard (A2)
Team B [against the proposition]: Kilian Weinberger (B1), Yann LeCun (B2)

Format
The debate will start with 5-minute introductory statements. Then each team will take turns asking a member of the other team a question. The total length of time for each question and answer is 4 minutes, so please keep the questions short (maximum 1 minute) and to the point. After those questions, any remaining time will feature prepared questions from the moderator.

Person A1 - 5min
Person B1 - 5min
Person A2 - 5min
Person B2 - 5min

(Questions should be short and no longer than 1 minute).
Team A asks B1 or B2 - 4min
Team B asks A1 or A2 - 4min
Team A asks B1 or B2 - 4min
Team B asks A1 or A2 - 4min

Team A asks B1 or B2 - 4min
Team B asks A1 or A2 - 4min
Team A asks B1 or B2 - 4min
Team B asks A1 or A2 - 4min

Moderator questions until time expires (3 minutes each)

Speakers

Confirmed invited speakers and panelists include: Bernhard Schölkopf (MPI), Kiri Wagstaff (JPL), Kilian Weinberger (Cornell), Victoria Krakovna (DeepMind), Jenn Wortman Vaughan (Microsoft), Jerry Zhu (UW-Madison), Yann LeCun, Hanna Wallach, Suchi Saria, Bolei Zhou, Zack Lipton, Patrice Simard, and Rich Caruana.

Organizers

Andrew Gordon Wilson (Cornell), Jason Yosinski (Uber AI Labs), Patrice Simard (Microsoft), Rich Caruana (Microsoft), and William Herlands (CMU)

Sponsors

The Interpretable ML Symposium gratefully acknowledges support from the following sponsors:

Microsoft, Uber, and the Future of Life Institute