Abstract
A myriad of approaches exists to help us peer inside automated decision-making systems based on artificial intelligence and machine learning algorithms. These tools and their insights, however, are socio-technological constructs themselves, hence subject to human biases and preferences as well as technical limitations. Under these conditions, how can we ensure that explanations are meaningful and fulfil their role of leading to understanding? In this talk I will demonstrate how different configurations of an explainability algorithm can affect the resulting insights, and show the importance of the strategy employed to present them to the user, arguing in favour of a clear separation between the technical and social aspects of such tools.
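As a toy illustration of the point about configuration choices (not the demonstration given in the talk), the sketch below fits a local linear surrogate of a black-box classifier around a single prediction using two different neighbourhood widths; the dataset, models and the surrogate procedure are assumptions made purely for illustration, yet the two configurations already yield different feature attributions for the same prediction.

```python
# Hypothetical sketch: how an explainer's configuration can change its insights.
# A local linear surrogate of a black-box model is fitted twice around the same
# instance, with two different neighbourhood widths, producing two explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Black-box model standing in for an automated decision-making system.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]  # the prediction we want to explain


def surrogate_attributions(width, n_samples=1000):
    """Fit a proximity-weighted linear surrogate in a neighbourhood of `instance`."""
    neighbourhood = instance + rng.normal(scale=width, size=(n_samples, X.shape[1]))
    predictions = black_box.predict_proba(neighbourhood)[:, 1]
    # Weight sampled points by their proximity to the explained instance (RBF kernel).
    distances = np.linalg.norm(neighbourhood - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * width ** 2))
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, predictions, sample_weight=weights)
    return surrogate.coef_  # per-feature attributions


# Same instance, same black box -- two configurations, two (different) explanations.
print("narrow neighbourhood:", np.round(surrogate_attributions(width=0.1), 3))
print("wide neighbourhood:  ", np.round(surrogate_attributions(width=2.0), 3))
```

The neighbourhood width is only one of many such knobs; which setting is "right" is not a purely technical question, which is precisely why the social side of presenting these insights deserves separate treatment.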