In this video we go through the main steps of the practical exercises, which use LIME to obtain local explanations for ECG classification. A good model explanation varies with the type of data. For text, for example, it might represent the presence or absence of words. For an image, it might represent the presence or absence of patches of similar pixels. For tabular data it could be a weighted combination of columns. LIME, as a model-agnostic approach, applies flexibly to a number of different models. Nevertheless, we still need to package the data appropriately with respect to our application. LIME offers several options on how to package your data, depending on what type of data you are using. It provides the LimeTabularExplainer for tabular data, the LimeTextExplainer for text data, the LimeImageExplainer for images, and the RecurrentTabularExplainer, which can be used with recurrent neural networks. We should also point out that the LimeTabularExplainer accepts tabular data whose features can take categorical or numerical values. Numerical values are perturbed by sampling from a normal distribution based on the mean and standard deviation of the training data. Similarly, for categorical features, it samples data according to the training distribution. On the other hand, the LimeTextExplainer uses individual words as features, and variations of the observation are created by randomly removing words from the original text. The LimeImageExplainer accepts an image as a three-dimensional array, which corresponds to RGB encoding. For our ECG classifier, both for the CNN and the LSTM, we adopted the RecurrentTabularExplainer. This is because it packages the input data with a shape of number of samples, by number of time steps, by number of features. Remember that both the convolutional and the LSTM architectures require three-dimensional input to their convolutional or LSTM layers.
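The packaging described above can be sketched as follows. This is a minimal illustration, not code from the exercises: the array names, the single "voltage" feature, and the class names are assumptions; the commented-out lines show roughly how the RecurrentTabularExplainer from the lime package would be instantiated on such data.

```python
import numpy as np

# Toy stand-in for the ECG training set: 100 beats, each sampled at
# 275 time steps, with a single feature (voltage) per time step.
n_samples, n_timesteps, n_features = 100, 275, 1
train_X = np.random.randn(n_samples, n_timesteps)

# The RecurrentTabularExplainer expects input shaped
# (num samples, num time steps, num features), matching the 3-D
# input required by convolutional and LSTM layers.
train_X_3d = train_X.reshape(n_samples, n_timesteps, n_features)
print(train_X_3d.shape)

# With the lime package installed, the explainer would be created
# roughly like this (class names below are assumed labels):
# from lime.lime_tabular import RecurrentTabularExplainer
# explainer = RecurrentTabularExplainer(
#     train_X_3d, mode="classification",
#     feature_names=["voltage"],
# )
# exp = explainer.explain_instance(train_X_3d[0], predict_fn,
#                                  num_features=10)
```

The reshape is the only change needed to reuse an existing CNN or LSTM with LIME, which is why the video emphasizes that the underlying model architecture stays untouched.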
And we would like to use LIME without changing the underlying architecture of our model. Here we partition the ECG beat, based on its time resolution, into a number of segments; remember that we have here 275 time steps. So, in order to estimate the importance of each segment, we average the LIME weights within each segment to get the explanation weight for our ECG classification. Our labels, which correspond to the different pathologies reflected in our ECG beats, are encoded as one-hot variables, as you see here. We have applied LIME to the CNN architecture we already examined in previous videos. Here we see the LIME explanation for a particular ECG beat. On the left, we see the output of LIME, which is a set of LIME weights, one for each segment. We see that we can have both positive and negative contributions. In order to understand the LIME weights better, we superimposed them on the ECG instance, and we see that the QRS complex gets the highest significance. However, other parts of the ECG are highlighted as well. Here we see the LIME explanation for another ECG instance. We see that this is very different from what we have seen before. LIME has been criticized before for its lack of robustness. Robustness refers to the fact that with small modifications of the ECG you can get a much larger difference in the explanation. And here we see that this negative contribution is also important with relation to the decision of the model we examine. LIME is a model-agnostic approach, and for this reason it is easy to apply to different neural network architectures without having to modify the model. So here we see that we chose an LSTM-based architecture. We examine again two different instances in order to understand the robustness of the method better. On the left we see again the LIME weights, with the slices ordered according to how much each contributes to the decision.
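The segment-averaging step and the one-hot label encoding mentioned above can be sketched as follows. The segment count (11 segments of 25 time steps) and the weight values are made up for illustration; in the actual exercises the per-time-step weights come from the explainer applied to the trained model.

```python
import numpy as np

n_timesteps = 275          # time steps per ECG beat, as in the video
n_segments = 11            # assumed segmentation: 11 segments of 25 steps

# Stand-in per-time-step LIME weights for one beat.
rng = np.random.default_rng(0)
lime_weights = rng.normal(size=n_timesteps)

# Average the LIME weights within each segment to obtain one
# explanation weight per segment (can be positive or negative).
segment_weights = lime_weights.reshape(n_segments, -1).mean(axis=1)
print(segment_weights.shape)

# One-hot encoding of the class labels (pathologies), as mentioned;
# here we assume three classes purely for illustration.
labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]
```

Averaging within segments trades time resolution for readability: a handful of segment weights is far easier to superimpose on the beat than 275 individual values.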
And on the right, we see the same information, but superimposed on the ECG signal in order to provide a more intuitive and easy-to-grasp view of the explanation. Here we see a local explanation of another ECG instance based on LIME. Again, we notice a relatively large difference in the explanation compared with the previous beat. One of the difficulties with explainability methods is that we do not have ground-truth data to verify their accuracy. Finally, we used LIME to derive explanations for a multilayer perceptron. In this case, again, we superimpose the LIME weights on the ECG signal we examine, and we see again that the QRS complex gets significant weights, which is similar to most of the other beats we examined before. Here we see the application of LIME on another ECG instance. We have here a local explanation with similar negative contributions. This is different from what we have seen in the previous example. However, remember that negative contributions also reveal significance with relation to the MLP model. Summarizing, we observed that the LIME explanations agreed that the area around the QRS complex contributes significantly to the prediction across all models. We also saw that LIME local explanations may disagree with each other for similar instances of the ECG signal. LIME also provided negative contributions, which likewise indicate significant importance.
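The superimposed view used throughout these figures can be sketched as below. The signal and weights are synthetic, and the plotting style (red shading for positive contributions, blue for negative) is an assumption, not necessarily the style used in the video.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

n_timesteps, n_segments = 275, 11
seg_len = n_timesteps // n_segments

rng = np.random.default_rng(1)
ecg = np.sin(np.linspace(0, 4 * np.pi, n_timesteps))  # stand-in beat
segment_weights = rng.normal(size=n_segments)          # stand-in LIME weights

fig, ax = plt.subplots()
ax.plot(ecg, color="black", label="ECG beat")
# Shade each segment by its averaged LIME weight: red for positive,
# blue for negative contributions, opacity scaled by magnitude.
for i, w in enumerate(segment_weights):
    ax.axvspan(i * seg_len, (i + 1) * seg_len,
               color="red" if w > 0 else "blue",
               alpha=min(abs(w), 1.0) * 0.3)
ax.legend()
fig.savefig("lime_overlay.png")
```

Overlaying the segment weights on the beat is what makes the QRS-complex pattern, and the disagreements between nearby instances, easy to spot by eye.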