If interpretability is the answer, what is the question?

Introduction

The goal of Interpretable Machine Learning (IML) is to provide human-intelligible descriptions of potentially complex ML models. However, IML descriptors themselves can be difficult to interpret: to reduce complexity, each method is limited to providing insight into one specific aspect of the model and the data. If the method's focus is not clearly understood, IML descriptors can easily be misinterpreted. We illustrate the problem with a simple example: suppose a practitioner aims to learn about the relevance of in-store temperature for sales at a petrol station....
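To make the pitfall concrete, here is a minimal sketch of one standard IML descriptor, permutation feature importance, applied to this kind of scenario. Note this is not the post's actual experiment: the data-generating process, variable names, and effect sizes below are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical setup: outside temperature drives sales directly, and also
# drives in-store temperature (via heating/cooling). In-store temperature
# itself has no causal effect on sales -- it is merely a correlated proxy.
n = 1_000
outside_temp = rng.normal(20, 5, n)
instore_temp = 0.3 * outside_temp + rng.normal(15, 1, n)
sales = 2.0 * outside_temp + rng.normal(0, 1, n)

X = np.column_stack([outside_temp, instore_temp])
model = RandomForestRegressor(random_state=0).fit(X, sales)

# Permutation importance answers a specific question: "How much does model
# performance drop when this feature's values are shuffled?"
result = permutation_importance(model, X, sales, n_repeats=10, random_state=0)
for name, imp in zip(["outside_temp", "instore_temp"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Depending on the model's fit, in-store temperature may receive a nonzero importance score here despite having no causal effect on sales. Whether that score is a correct answer depends entirely on the question being asked: it describes what the model relies on, not what drives sales in the real world.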

April 16, 2022 · 9 min · Gunnar König