If interpretability is the answer, what is the question?
Introduction

The goal of Interpretable Machine Learning (IML) is to provide human-intelligible descriptions of potentially complex ML models. However, IML descriptors themselves can be difficult to interpret: to reduce complexity, each method is limited to providing insight into a specific aspect of the model and the data. If a method's focus is not clearly understood, its descriptors can be misinterpreted. We illustrate the problem with a simple example: suppose a practitioner aims to learn about the relevance of in-store temperature for sales at a petrol station....
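As a minimal sketch of why a descriptor must be read in light of the question it answers, consider two common IML methods applied to the same fitted model in Python. The data-generating process (a hypothetical outdoor temperature that both drives sales and correlates with in-store temperature), the feature names, and the model choice are all illustrative assumptions, not part of the original example.

```python
# Hypothetical petrol-station data: sales are driven by outdoor temperature,
# while in-store temperature merely tracks it. Two IML descriptors applied
# to the same model answer two different questions about "relevance".
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, partial_dependence

rng = np.random.default_rng(0)
n = 2000
outdoor = rng.normal(15, 8, n)                   # outdoor temperature (C), assumed driver
in_store = outdoor + rng.normal(0, 1, n)         # in-store temperature tracks outdoor
sales = 100 + 3 * outdoor + rng.normal(0, 5, n)  # sales depend on outdoor temp only

X = np.column_stack([in_store, outdoor])
model = RandomForestRegressor(random_state=0).fit(X, sales)

# Permutation importance asks: how much does the model's predictive
# performance drop when a feature is scrambled? This is a question about
# the model's reliance on the feature, not about a causal effect on sales.
pi = permutation_importance(model, X, sales, n_repeats=10, random_state=0)
print("permutation importance:",
      dict(zip(["in_store", "outdoor"], pi.importances_mean.round(2))))

# Partial dependence asks a different question: how does the model's average
# prediction change as in-store temperature is varied, with the other
# feature held at its observed values?
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
print(f"PD range for in_store: {np.ptp(pd_result['average']):.2f}")
```

Because the two descriptors answer different questions, reading either one as "the relevance of in-store temperature for sales" without qualification invites exactly the kind of misinterpretation described above.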