Case Study: Predicting House Prices


A course from the University of Washington

Machine Learning: Regression

4,013 ratings



From this lesson

Simple Linear Regression

Our course starts from the most basic regression model: just fitting a line to data. This simple model for forming predictions from a single, univariate feature of the data is appropriately called "simple linear regression".

In this module, we describe the high-level regression task and then specialize these concepts to the simple linear regression case. You will learn how to formulate a simple regression model and fit the model to data using both a closed-form solution as well as an iterative optimization algorithm called gradient descent. Based on this fitted function, you will interpret the estimated model parameters and form predictions. You will also analyze the sensitivity of your fit to outlying observations.

You will examine all of these concepts in the context of a case study of predicting house prices from the square feet of the house.

- Emily Fox, Amazon Professor of Machine Learning, Statistics

- Carlos Guestrin, Amazon Professor of Machine Learning, Computer Science and Engineering

[MUSIC]

Okay, so let's take a moment to compare the two approaches that we've gone over,

either setting the gradient equal to zero or doing gradient descent.

Well, in the case of minimizing residual sum of squares,

we showed that both were fairly straightforward to do.

But for a lot of the machine learning methods that we're interested in,

when we take the gradient and set it equal to zero,

well, there's just no closed-form solution to that problem.

So often we have to turn to methods like gradient descent.
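The two approaches can be sketched side by side. Here is a minimal illustration (my own synthetic example, not the course's code) of fitting simple linear regression y ≈ w0 + w1·x by minimizing residual sum of squares, first in closed form and then with gradient descent:

```python
import numpy as np

# Synthetic data for the demo: a line plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, size=100)

# Approach 1: set the RSS gradient to zero and solve in closed form.
w1_closed = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
w0_closed = y.mean() - w1_closed * x.mean()

# Approach 2: gradient descent -- note that we must pick a step size (eta)
# and a convergence criterion (a tolerance on the gradient's magnitude).
def gradient_descent(x, y, eta=0.5, tol=1e-8, max_iter=100_000):
    w0, w1 = 0.0, 0.0
    for _ in range(max_iter):
        err = y - (w0 + w1 * x)
        g0 = -2.0 * err.mean()        # dRSS/dw0, scaled by 1/N
        g1 = -2.0 * (err * x).mean()  # dRSS/dw1, scaled by 1/N
        if np.hypot(g0, g1) < tol:    # stop once the gradient is nearly zero
            break
        w0, w1 = w0 - eta * g0, w1 - eta * g1
    return w0, w1

w0_gd, w1_gd = gradient_descent(x, y)
```

On this small problem the two approaches recover essentially identical coefficients, but only the gradient descent version forced us to choose a step size and a stopping rule.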

And likewise, as we're gonna see in the next module, where we turn to having lots

of different inputs, lots of different features in our regression.

Even though there might be a closed-form solution to setting the gradient equal to

zero, sometimes in practice it can be much more efficient

computationally to implement the gradient descent approach.
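To make the computational point concrete, here is a hedged sketch of the multi-feature case being alluded to (the dimensions and names are my own, not from the course): with D features the closed-form solution, the normal equations, still exists, but solving it costs on the order of D³ operations, while each gradient descent pass over the data costs only about N·D.

```python
import numpy as np

# Illustrative multi-feature setup: N observations, D features plus an intercept.
rng = np.random.default_rng(1)
N, D = 200, 5
H = np.hstack([np.ones((N, 1)), rng.normal(size=(N, D))])  # intercept + D features
w_true = np.arange(D + 1, dtype=float)                     # known weights for the demo
y = H @ w_true + rng.normal(0.0, 0.01, size=N)

# Closed form: solve (H^T H) w = H^T y.
# np.linalg.solve factors the D+1 x D+1 system (roughly O(D^3) work)
# rather than forming an explicit inverse.
w_closed = np.linalg.solve(H.T @ H, H.T @ y)
```

When D is small, as here, the closed form is trivial; the gradient descent alternative becomes attractive as D grows and that cubic cost dominates.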

And finally one thing that I should mention about the gradient descent

approach is the fact that in that case, we had to choose a step size and

a convergence criterion.

Whereas, of course, if we take the gradient and

are able to set it to zero, we don't have to make any of those choices.

So that is a downside to the gradient descent approach:

having to specify these parameters of the algorithm.
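A toy illustration (my own, not from the lecture) of why that step size choice matters: minimizing f(w) = w², whose gradient is 2w, converges for small step sizes but diverges once the step size is too large.

```python
# Gradient descent on f(w) = w^2, starting from w = 10.
def gd_final_w(eta, w=10.0, n_steps=50):
    for _ in range(n_steps):
        w = w - eta * 2.0 * w  # each step multiplies w by (1 - 2*eta)
    return w

w_small = gd_final_w(eta=0.1)  # |1 - 0.2| < 1: shrinks toward the minimum at 0
w_large = gd_final_w(eta=1.1)  # |1 - 2.2| > 1: overshoots every step and diverges
```

The closed-form route never exposes us to this failure mode, which is exactly the trade-off being described.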

But sometimes we're relying on these types of optimization algorithms to

solve our optimization objective.

[MUSIC]