Financial institutions such as banks and insurance companies increasingly rely on complex machine learning models to drive their business. Such models can handle huge amounts of data and make highly accurate predictions, and their flexibility allows them to capture complex interaction patterns. The price to pay for using such models is a lack of transparency about how they arrive at their predictions: the impact of a given input feature on a given prediction is not as straightforward as in traditional statistical models (for example, Generalized Linear Models). Not understanding how a model makes its predictions induces a lack of trust in those predictions, which is a real issue when the models are used to drive financial institutions’ businesses. It is therefore critical to be able to open up these black box models in order to gain visibility into the link between input features and predictions. This paper presents different machine learning interpretability techniques that shed light on a model’s predictions either globally (over the whole dataset) or locally (for a specific segment of the dataset, or even at an individual level).