that's a real number. And the second term over here, xi,

this term over there is a vector, right? Because xi may be a vector that would be

say xi0, xi1, xi2, right?

And, what is the summation? Well, what the summation is saying is

that this term that is this term over here,

this is equal to (h(x1) - y1) * x1, plus (h(x2) - y2) * x2, plus,

you know, and so on. Okay?

Because this is a summation over i, so as i ranges from i = 1 through m,

you get these different terms, and you're summing up these terms here.

And the meaning of each of these terms, you know,

this is a lot like, if you remember, from

the earlier quiz, right? You saw this equation, where

we said that, in order to vectorize this code,

we would, instead, set u = 2v + 5w. So, we're saying that the vector u is

equal to two times the vector v plus five times the vector w.

So, this is an example of how to add different vectors.
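That u = 2v + 5w example might be written out in code like this (a sketch in Python with NumPy, since the course's own code is in Octave; the vectors here are made-up values):

```python
import numpy as np

# Hypothetical example vectors; any two vectors of the same length work.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Vectorized: one line instead of a for loop over the elements.
u = 2 * v + 5 * w
print(u)  # [22. 29. 36.]
```

Each element of u is two times the corresponding element of v plus five times the corresponding element of w, all computed in one step.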

And this summation is the same thing. This is saying that the summation over

here is just some real number, right? That's kind of like the number two, or

some other number, times the vector x1. This is kind of like, you know,

two times v: some real number times x1.

And then, plus, you know, instead of five times w, we instead have

some other real number times some other vector.

And then, you add on other vectors, you know,

plus dot, dot, dot, plus the other vectors,

which is why, overall, this thing over here, that whole

quantity, that delta, is just some vector. And concretely, if n = 2,

the three elements of delta correspond exactly to this first term,

this second term, and this third term.
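Forming delta as one matrix expression and then updating theta in one line can be sketched like this (an illustration in Python with NumPy rather than the course's Octave, using a made-up data set of m = 4 examples and n = 2 features):

```python
import numpy as np

# Made-up data: each row of X is an example x_i = (x_i0, x_i1, x_i2),
# with x_i0 = 1 as usual.
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 4.0, 5.0],
              [1.0, 6.0, 7.0],
              [1.0, 8.0, 9.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
theta = np.zeros(3)
alpha = 0.01
m = len(y)

# Each (h(x_i) - y_i) is a real number scaling the vector x_i;
# summing those scaled vectors over i gives delta in one expression.
errors = X @ theta - y                # all the h(x_i) - y_i at once
delta = (1.0 / m) * (X.T @ errors)    # the vector delta

# The simultaneous update of theta_0, theta_1, theta_2 in one line.
theta = theta - alpha * delta
```

All the elements of theta get updated together, which is exactly the simultaneous update the rules on top of the slide require.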

Which is why, when you update theta according to theta minus alpha delta, we

end up carrying out exactly the same simultaneous updates as the update

rules that we have on top. So, I know that there was a lot that

happened on this slide, but again, feel free to pause the video, and I'd

encourage you to step through it if you aren't sure what

just happened. I'd encourage you to step through the slide to make sure you

understand why it is that this update here, with this definition of delta,

right, why it is that that's equal to this

update on top. And if it's still not clear, one insight

is that, you know, this thing over here, that's exactly the vector x.

And so, we're just taking, you know, all three of these computations and

compressing them into one step with this vector delta, which is why we can come up

with a vectorized implementation of this step of linear regression this

way. So,

I hope this step makes sense, and do look at the video and see

if you can understand it. In case you don't quite understand the

equivalence of this math, if you implement this, it turns out to be the

right answer anyway. So even if you didn't quite

understand the equivalence, if you just implement it this way, you'll

be able to get linear regression to work. But, if you're able to figure out why

these two steps are equivalent, then hopefully, that will give you a better

understanding of vectorization as well. And finally,

if you're implementing linear regression using more than one or two features.

Sometimes we use linear regression with tens or hundreds or thousands of

features. And if you use the vectorized

implementation of linear regression, your code will run much faster than if you had,

say, your old for loop that was, you know, updating theta zero, then theta one,

then theta two yourself. So, using a vectorized implementation,

you should be able to get a much more efficient implementation of linear

regression. And when you vectorize the later algorithms

that we'll see in this course, it's a good trick, whether in Octave or some other

language like C++ or Java, for getting your code

to run more efficiently.
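To make that efficiency point concrete, here is a small comparison (made-up random data, and Python rather than the course's Octave): the explicit for loop that updates each theta_j separately, next to the one-line vectorized update. Both produce the same new theta; the vectorized form replaces the nested loops with a single matrix expression.

```python
import numpy as np

# Made-up data: m = 100 examples, n = 3 parameters (including theta_0).
rng = np.random.default_rng(0)
m, n = 100, 3
X = np.column_stack([np.ones(m), rng.standard_normal((m, n - 1))])
y = rng.standard_normal(m)
theta = np.zeros(n)
alpha = 0.1

# For-loop version: update theta_0, then theta_1, then theta_2.
grad = np.zeros(n)
for j in range(n):
    for i in range(m):
        grad[j] += (X[i] @ theta - y[i]) * X[i, j]
theta_loop = theta - alpha * grad / m

# Vectorized version: one matrix expression, no explicit loops.
theta_vec = theta - alpha * (X.T @ (X @ theta - y)) / m

print(np.allclose(theta_loop, theta_vec))  # True
```

On large data sets, the vectorized version hands the whole computation to optimized linear-algebra routines instead of interpreting the loop body m times n times.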