Congratulations. You've made it to the end of the Launching into Machine Learning course. Let's recap what we have learned so far. You've learned about the attributes related to data quality: accuracy of data, consistency of data, timeliness of data, and completeness of data. You learned that Exploratory Data Analysis, or EDA, is an approach to data analysis that employs a variety of techniques, mostly graphical, to maximize insight into a dataset, uncover underlying structure, extract important variables, detect outliers and anomalies, test underlying assumptions, develop parsimonious models, and determine optimal factor settings.

Training and evaluating an ML model is an experiment in finding the right generalizable model, one that fits your training dataset but doesn't memorize it. As you see here, we have a simplistic linear model that does not fit the relationships in the data. You'll be able to see how bad this is immediately by looking at your loss metric during training, and visually on this graph, as there are quite a few points outside the shape of the trend line. This is called underfitting. On the opposite end of the spectrum is overfitting, as shown on the right. Here, we greatly increased the complexity of our model, which seems to fit the training dataset really well, almost too well. This is where the evaluation dataset comes in. You can use the evaluation dataset to determine whether the model parameters are leading to overfitting. Overfitting, or memorizing your training dataset, can be far worse than having a model that only adequately fits your data. Somewhere in between an underfit model, where the loss metric is not low enough, and an overfit model, which doesn't generalize, is the right model fit.

To make our models generalize well and not simply memorize the training dataset, we split our original dataset into training, evaluation, and testing sets, and only show each to the model at predefined milestones.
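To make the underfitting/overfitting trade-off concrete, here is a minimal, self-contained sketch in Python using only NumPy. It generates a synthetic noisy quadratic dataset (an assumption for illustration, not course data), splits it into training, evaluation, and test sets, and fits polynomials of increasing degree. Comparing training loss against evaluation loss is what reveals the fit quality: a degree-1 model underfits (high loss on both splits), while a high-degree model drives training loss down but generalizes worse.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic dataset: a noisy quadratic trend (illustrative stand-in
# for a real training dataset).
x = rng.uniform(-3, 3, size=200)
y = 0.5 * x**2 + x + rng.normal(scale=1.0, size=x.shape)

# Split the original dataset into training / evaluation / test (60/20/20),
# shuffling first so each split is representative.
idx = rng.permutation(len(x))
train_idx, eval_idx, test_idx = idx[:120], idx[120:160], idx[160:]

def mse(degree, fit_idx, score_idx):
    """Fit a degree-`degree` polynomial on one split, score MSE on another."""
    coeffs = np.polyfit(x[fit_idx], y[fit_idx], degree)
    preds = np.polyval(coeffs, x[score_idx])
    return np.mean((y[score_idx] - preds) ** 2)

for degree in (1, 2, 9):
    train_loss = mse(degree, train_idx, train_idx)
    eval_loss = mse(degree, train_idx, eval_idx)
    print(f"degree {degree}: train MSE {train_loss:.3f}, eval MSE {eval_loss:.3f}")
```

The degree-1 model shows high loss on both splits (underfitting); the degree-9 model achieves the lowest training loss, but the gap between its training and evaluation loss is the signature of overfitting. The test split is held out entirely until a final model is chosen.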
In our labs, we discovered that ML models can make incorrect predictions for a number of reasons: poor representation of all use cases, overfitting, and underfitting. We also learned that we can measure the quality of our model by examining the predictions it made. Congratulations on completing our Launching into Machine Learning course. We hope you have found value in our course content, labs, readings, discussions, and quizzes.