So, this is an example of hypothetical sensor data from a production machine. Each row contains information about the production of a particular part, identified by its associated part number. Our goal is to predict asperity, which is highly correlated with whether the part is healthy or faulty, based on the observed sensor values. So, which column can be used to create a simple rule for this? First, we have to observe that all three columns are aggregations of some sort of sensor value, so we're losing information, but this is fine for this example. We notice that the vibration aggregation gives us the best indication for our rule, so let's code it. Let's start with the creation of a data point. The first field is the part number, so let's take 100. The next field is maxtemp, which we will set to 35; the mintemp is 35 as well. Finally, we set maxvibration to 12. After producing the part, the asperity is measured, so we set it to 0.32. Let's copy this data point so that we end up with our four measurements. As in the example on the slides, I now update those values accordingly. Let's implement our simple rule to predict asperity based on a sensor value. Obviously, maxvibration has the largest impact on the outcome. Therefore, if maxvibration is greater than 100, we return 0.13, or 0.33 otherwise. If we now test this function, we get quite good results. So, now let's see if we can do better without hard-coding a rule. This formula is called a linear regression model. It's called regression because it predicts a continuous value based on observations x and weights w. Let's create our first machine learning algorithm, called linear regression, in Python. Remember that we have to create a linear combination between our input fields and some parameters w. If you plot this, w1 is the offset of the line. If you run this now, you obviously run into an error, because we haven't defined w yet. So, let's do this now.
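The steps above can be sketched in Python as follows. A minimal sketch: the first data point uses the values from the transcript, while the other three rows are hypothetical placeholders standing in for the values shown on the slides, and the threshold rule follows the one described above.

```python
# First data point follows the transcript; the remaining three are
# hypothetical placeholders for the values updated from the slides.
data = [
    {"partno": 100, "maxtemp": 35, "mintemp": 35, "maxvibration": 12, "asperity": 0.32},
    {"partno": 101, "maxtemp": 46, "mintemp": 35, "maxvibration": 21, "asperity": 0.34},
    {"partno": 130, "maxtemp": 56, "mintemp": 46, "maxvibration": 3412, "asperity": 0.12},
    {"partno": 131, "maxtemp": 58, "mintemp": 48, "maxvibration": 3542, "asperity": 0.13},
]

def predict(part):
    # Hard-coded rule: maxvibration dominates the outcome.
    if part["maxvibration"] > 100:
        return 0.13
    return 0.33

# Test the rule against the measured asperity of each part.
for part in data:
    print(part["partno"], predict(part), part["asperity"])
```

With these placeholder rows, the rule's prediction lands close to the measured asperity for every part, which is the "quite good results" observation from the walkthrough.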
Now, let's choose the parameters w in a way that somehow resembles the rule which we created before. If we set everything to zero, we get zero as a result. So, let's try to adjust those values: we just take the numbers from our dataset and play around until we get a better result. As we can see, we're getting relatively good results here. In a real-world scenario, of course, those parameters w would be set by an optimizer, which is part of machine learning training. Maybe you've noticed that there is one x missing in order to get equal-sized vectors. Therefore, we define x_0 as one. This is the bias term, or the offset, of the linear regression. So now, we can multiply x with w, because both vectors have the same length. If you write down what we have learned before, you will come up with the following. If it doesn't look like linear regression to you, just put y in the equation and you can see it. So, that's cool: we can express linear regression with a single vector-vector multiplication.
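This last point can be sketched in Python with numpy. The weight values below are hypothetical stand-ins for the hand-tuned parameters in the walkthrough; the point is only that the explicit linear combination and the dot product give the same answer once x_0 = 1 is prepended.

```python
import numpy as np

# Hypothetical hand-tuned weights; in practice an optimizer would set these.
w = np.array([0.1, 0.02, 0.01, 0.002])

# Prepend x_0 = 1 (the bias term) so x and w have the same length.
# Order: [x0, maxtemp, mintemp, maxvibration]
x = np.array([1, 35, 35, 12])

# The explicit linear combination: y = w0*x0 + w1*x1 + w2*x2 + w3*x3
y_manual = w[0] * x[0] + w[1] * x[1] + w[2] * x[2] + w[3] * x[3]

# ...which is exactly a single vector-vector multiplication:
y_hat = np.dot(x, w)
```

Writing the bias as w0 * x0 with x0 fixed to 1 is what lets the whole model collapse into one dot product.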