Are you curious about what data can tell you? Do you want a deep understanding of the core ways machine learning drives business? Do you want to be able to discuss regression, classification, deep learning, and recommender systems with experts? In this course, you will gain hands-on experience through a series of practical case studies. By the end of this course,


Python Programming, Machine Learning Concepts, Machine Learning, Deep Learning

4.6 (8,591 ratings)

- 5 stars: 6,257 ratings
- 4 stars: 1,840 ratings
- 3 stars: 322 ratings
- 2 stars: 84 ratings
- 1 star: 88 ratings

Sep 28, 2015

Excellent course, with really good lectures, material and assignment. Plus the professors are really amazing and their enthusiasm is really refreshing and makes the class more interesting. Loved it!

Jun 05, 2017

This course is very helpful for people who are novice in machine learning. The course uses Graphlab Create which is different from scikit or R-libraries, but the tool(Graphlab) is excellent to use.

From the lesson

Welcome

Machine learning is everywhere, but is often operating behind the scenes. This introduction to the specialization provides you with insights into the power of machine learning, and the multitude of intelligent applications you personally will be able to develop and deploy upon completion. We also discuss who we are, how we got here, and our view of the future of intelligent applications.

#### Carlos Guestrin

Amazon Professor of Machine Learning

#### Emily Fox

Amazon Professor of Machine Learning

[MUSIC]

As Emily discussed,

we're gonna see machine learning through the lens of a wide range of case studies

in different areas that really ground the concepts behind them.

So, other machine learning classes that you might take out there

are really a laundry list of algorithms and methods.

So things like support vector machines and

kernels and logistic regression and neural networks and so on.

And they're just a laundry list of methods.

And the problem with that approach is that, since you start from the algorithms, you

end up with really simplistic use cases and applications

that are really disconnected from reality.

So, we're doing things very differently in this specialization, and

we've done this for quite a while here.

Emily and I created a course at the University of Washington

on machine learning at scale for big data.

We pioneered this use case approach for teaching machine learning.

And in that course, we saw a lot of positive feedback

from folks really grounding their understanding of the concepts.

So we're going to start from the use cases in the first course.

And by starting from use cases, you're really going to be able to grasp the key

concepts and the techniques that allow you to build, measure the quality and

understand whether your intelligent application is working well or not.

And in the end, you are going to build a bunch of these intelligent applications.

So to build such intelligent applications,

you typically have to think about: what task am I going to solve?

Say, a sentiment analysis problem. What models, what

machine learning models am I going to use, things like support vector machines or

regression? And what methods am I going to use to optimize the parameters of that model?

And then I ask a question like is this really providing the intelligence that I'm

hoping for?

How do we measure the quality of that system?

So in this specialization what we're gonna do is defer the core

pieces of how to describe a model and optimize it to the follow-on courses.

And this first course is going to be focused on helping us figure out what

task we're trying to solve, what machine learning methods make sense, and

how to measure them.

And with that, using the algorithms as black boxes, we're going to be able to

build a wide range of really intelligent cool applications together.

And we'll actually code them and

build them and demonstrate them in a wide range of ways.

Now the follow-on courses,

there are going to be four of those plus a capstone.

They really go into depth in different areas.

So let me give you a few quick examples of the kind of depth we're going to see

throughout this specialization.

So the regression course is going to talk about various models of predicting

a real value, so for example, a house price from the features of the house.

And we're going to discuss linear regression techniques,

we're going to discuss advanced techniques like ridge regression and lasso that allow

you to select what features are most appropriate for your problem.

We're going to talk about optimization techniques like gradient descent and

coordinate descent to optimize the parameters of those models.

As well as some key machine learning concepts like loss functions,

bias-variance tradeoffs, cross-validation.

Things that you need to know to really take this method and kind of improve them,

develop them and build applications with them.
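To make the regression ideas above concrete, here is a minimal sketch of ridge regression fit by gradient descent, using made-up house data (square footage and bedroom count), not the course's dataset. The step size, penalty strength, and features are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy data: predict house price from two features
# (square footage, number of bedrooms). Not the course's dataset.
X = np.array([[1.0, 1400, 3],
              [1.0, 2000, 4],
              [1.0, 1100, 2],
              [1.0, 1800, 3]])  # leading 1.0 is the intercept term
y = np.array([240.0, 330.0, 180.0, 295.0])  # prices in $1000s

# Normalize non-intercept features so gradient descent is stable.
X[:, 1:] = (X[:, 1:] - X[:, 1:].mean(axis=0)) / X[:, 1:].std(axis=0)

w = np.zeros(X.shape[1])
alpha = 0.01   # step size (an assumption, tuned for this toy data)
lam = 0.1      # ridge (L2) penalty strength

for _ in range(5000):
    residual = X @ w - y
    # Gradient of squared loss plus L2 penalty (intercept not penalized).
    grad = 2 * X.T @ residual
    grad[1:] += 2 * lam * w[1:]
    w -= alpha * grad / len(y)

print(w)  # learned intercept and feature weights
```

The lasso variant mentioned in the course swaps the L2 penalty for an L1 penalty, which drives some weights exactly to zero and thus performs feature selection.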

In the second course, on classification, we're gonna build, for example,

the sentiment analysis use case that Emily talked about, and

talk about more of those classification methods.

From linear classifiers to more advanced things like

logistic regression and support vector machines.

But then add kernels and

decision trees which allow you to deal with non-linear complex features.

We'll talk about optimization methods for applying these techniques at scale, and

for building ensembles of them, something called boosting.

And then the underlying concepts in machine learning that really help

you grasp a classifier, scale it up, and apply it to different problems.
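As a small illustration of the classification ideas above, here is a logistic regression sketch trained by gradient descent on made-up bag-of-words sentiment data; the word features and labels are invented for the example, not taken from the course.

```python
import numpy as np

# Hypothetical bag-of-words counts for the words ["great", "awful"],
# plus an intercept column. Labels: 1 = positive, 0 = negative.
X = np.array([[1, 2, 0],   # "great great"          -> positive
              [1, 0, 2],   # "awful awful"          -> negative
              [1, 3, 1],   # mostly positive words  -> positive
              [1, 0, 1]])  # "awful"                -> negative
y = np.array([1, 0, 1, 0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
for _ in range(2000):
    p = sigmoid(X @ w)       # predicted probability of positive sentiment
    grad = X.T @ (p - y)     # gradient of the log loss
    w -= 0.1 * grad          # step size 0.1 is an illustrative choice

preds = (sigmoid(X @ w) > 0.5).astype(int)
print(preds)  # matches y on this tiny separable toy set
```

A support vector machine would instead maximize the margin between the two classes, and kernels or decision trees (as the course notes) extend these linear ideas to non-linear decision boundaries.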

Now, in the next course, we're gonna focus on clustering and

retrieval, especially in the context of documents.

So we're gonna talk about basic techniques like nearest neighbors

as well as more advanced clustering techniques, mixture of Gaussians, and even

latent Dirichlet allocation, an advanced text analysis clustering technique.

We're gonna talk about the algorithms that underpin these things and

how to scale them up with techniques like KD-trees and

sampling and expectation maximization.
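The retrieval idea above can be sketched as brute-force nearest-neighbor search over word-count vectors; the four "documents" here are invented toy vectors, and KD-trees or sampling (covered in the course) are what make this scale to real collections.

```python
import numpy as np

# Hypothetical tiny word-count vectors for four "documents".
docs = np.array([[5, 0, 1],
                 [4, 1, 0],
                 [0, 6, 2],
                 [1, 5, 3]], dtype=float)

query = np.array([4.0, 0.0, 1.0])  # the document we want neighbors for

# Brute-force nearest neighbor under Euclidean distance: compute the
# distance to every document and take the argmin.
dists = np.linalg.norm(docs - query, axis=1)
nearest = int(np.argmin(dists))
print(nearest)  # index of the closest document
```

For real document retrieval, TF-IDF weighting and cosine distance are common refinements of the raw counts and Euclidean distance used in this sketch.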

Now the core concepts here are really around how to scale these things up,

how to measure the quality and really how to write them

as distributed algorithms using techniques like map-reduce,

which are implemented in systems like Hadoop that you might have learned about.

So in the fourth course, you're actually going to write some map-reduce code for

distributed machine learning.
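To give a feel for the map-reduce style mentioned above, here is a single-machine word-count sketch: mappers emit (word, 1) pairs, a shuffle groups them by key, and reducers sum the counts. A real framework like Hadoop distributes these same phases across machines; the two input lines are invented for illustration.

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.split()]

def reducer(word, counts):
    # Reduce phase: sum all counts emitted for one word.
    return (word, sum(counts))

lines = ["machine learning at scale", "learning from data at scale"]

# Shuffle: group mapper outputs by key (the word).
grouped = defaultdict(list)
for word, count in chain.from_iterable(mapper(line) for line in lines):
    grouped[word].append(count)

result = dict(reducer(w, c) for w, c in grouped.items())
print(result)  # e.g. {'machine': 1, 'learning': 2, ...}
```

The key design point is that both phases operate on independent chunks (lines for the mapper, per-word groups for the reducer), which is what lets a framework run them in parallel on many machines.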

Now in the final technical course we're gonna focus on techniques of matrix

factorization and dimensionality reduction, which are widely applicable,

but in particular for recommender systems, for recommending products.

So these are things like collaborative filtering,

matrix factorization, PCA, and the underlying techniques for

optimizing them, like coordinate descent, Eigen decomposition, SVD.
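As a minimal sketch of the matrix factorization idea above, here is a rank-2 truncated SVD of a made-up, fully observed user-by-product rating matrix; real recommender data has missing entries, which this sketch deliberately ignores.

```python
import numpy as np

# Hypothetical user-by-product rating matrix (4 users, 3 products).
R = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0],
              [1.0, 2.0, 4.0]])

# Rank-2 factorization via truncated SVD: R ~ U_k S_k V_k^T, giving a
# low-dimensional "taste" vector per user and per product.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(R_hat, 1))  # low-rank approximation of the ratings
```

Collaborative filtering methods fit a similar low-rank factorization, but only to the observed ratings, typically with coordinate descent or alternating least squares as the course describes.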

And then, a wide variety of core machine learning concepts that are really useful,

especially in the recommender domain.

Like how to pick a diverse set of recommendations and

how to scale them up to large problems.

Now the capstone is going to be really exciting and towards the end of

this module, I'm going to go back and tell you quite a bit more about the capstone.

But just to give you a little hint, you're going to build

something extremely cool that you can show to all your friends, potential employers.

And you'll see that you can build a really smart intelligent

application around recommenders that combines text data,

image data, sentiment analysis, and deep learning. It's going to be really cool.

[MUSIC]