Learner reviews and feedback for Machine Learning: Clustering & Retrieval, offered by University of Washington

4.6
1,801 ratings
307 reviews

Course Overview

Case Study: Finding Similar Documents

A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover? In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. You will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.

Learning Outcomes: By the end of this course, you will be able to:

- Create a document retrieval system using k-nearest neighbors.
- Identify various similarity metrics for text data.
- Reduce computations in k-nearest neighbor search by using KD-trees.
- Produce approximate nearest neighbors using locality sensitive hashing.
- Compare and contrast supervised and unsupervised learning tasks.
- Cluster documents by topic using k-means.
- Describe how to parallelize k-means using MapReduce.
- Examine probabilistic clustering approaches using mixture models.
- Fit a Gaussian mixture model using expectation maximization (EM).
- Perform mixed membership modeling using latent Dirichlet allocation (LDA).
- Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
- Compare and contrast initialization techniques for non-convex optimization objectives.
- Implement these techniques in Python.
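As a concrete illustration of the first learning outcome (a document retrieval system based on k-nearest neighbors), here is a minimal sketch in Python. It is not the course's own assignment code (the assignments use GraphLab Create and also cover KD-trees and locality sensitive hashing); it simply indexes a toy, made-up corpus with TF-IDF features and retrieves the most similar documents under cosine distance, using scikit-learn's brute-force nearest-neighbor search.

    # Minimal sketch of k-nearest-neighbor document retrieval with
    # TF-IDF features and cosine distance (scikit-learn, not the
    # GraphLab Create toolkit used in the course assignments).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import NearestNeighbors

    # Toy corpus standing in for a collection of news articles.
    corpus = [
        "The central bank raised interest rates again this quarter.",
        "The championship game went to overtime after a late goal.",
        "New studies link interest rate hikes to slower housing markets.",
        "The striker scored twice in the final minutes of the match.",
    ]

    # Represent each document as a TF-IDF vector.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(corpus)

    # Index the corpus for k-nearest-neighbor search under cosine distance.
    knn = NearestNeighbors(n_neighbors=2, metric="cosine")
    knn.fit(X)

    # Retrieve the two most similar articles to a new query document.
    query = vectorizer.transform(["Rate hikes are cooling the housing market."])
    distances, indices = knn.kneighbors(query)
    for dist, idx in zip(distances[0], indices[0]):
        print(f"distance={dist:.3f}  doc: {corpus[idx]}")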

Top Reviews

BK

Aug 25, 2016

Excellent material! It would be nice, however, to mention some reading material, books or articles, for those interested in the details and the theories behind the concepts presented in the course.

JM

Jan 17, 2017

Excellent course, well thought out lectures and problem sets. The programming assignments offer an appropriate amount of guidance that allows the students to work through the material on their own.

Filter by:

201 - 225 of 295 Reviews for Machine Learning: Clustering & Retrieval

By Prabhu

Nov 02, 2019

Very clear explanation of concepts with a good selection of examples.

By Mark h

Aug 08, 2017

Very helpful

By Kim K L

Oct 04, 2016

Another super course. Though admittedly (for me at least) very difficult to complete within the allotted time for one session of the course. Lots of advanced material that requires substantial study to really comprehend, i.e., it should never be enough just to hack & run the code (that's the easier challenge). I still have a long wish list of topics coming out of this course that I need (want) to understand better. But at least the background to do so is neatly provided here. So without further ado ... applause!

By Ce J

Jun 26, 2017

Well organized and easy to understand.

By Freeze F

Oct 26, 2016

From LDA onwards the pace ramped up! Please go slower during the advanced topics. But altogether it was a great course.

By Alexandre

Oct 23, 2016

ok

By Sathiraju E

Mar 03, 2019

Very nice course. Things are well explained; however, some concepts could be expanded further.

By Renato R R

Jan 05, 2018

This course is amazing. I could really work on real-world problems. It is a pity that we are not going to have the following courses:

Recommender Systems & Dimensionality Reduction

Machine Learning Capstone: An Intelligent Application with Deep Learning

Thank you Emily and Carlos.

By Prasant K S

Dec 21, 2016

The material is explained in simple and lucid language by Emily, with the code illustrated by Carlos. Go for it.

By Nitish V

Oct 29, 2017

The course is good. It covered lots of topics.

By Job W

Jul 23, 2016

Great!

By Yi W

Sep 28, 2016

As someone very keen on math, I would find more math background in optional videos helpful.

By Tripat S

Aug 07, 2016

This is the best course in ML - I would recommend it. The sequence of the courses is the best, and this ML specialization is a career boost.

By Alessio D M

Aug 01, 2016

Very nice course, and a great grasp on clustering techniques. If I could just suggest something to improve, it would be the section on LDA and Gibbs: it's very high level and it would be really nice to have some more technical insights on those techniques (perhaps with optional sections, as for other topics).

By Edwin P

Feb 15, 2019

Excellent; a good contribution to technical and practical ML knowledge.

By Shuang D

Jun 29, 2018

Advanced knowledge of ML; great course.

By Rahul G

Jun 13, 2017

Good course but Week 5 LDA needs improvement.

By Velpula M K

Dec 06, 2019

Good and best to learn.

By Karundeep Y

Sep 18, 2016

Best Course.

By yoon s w

Jul 26, 2018

Good for learning what clustering and retrieval are.

By Usman

Nov 28, 2016

This was another great course. I hope that the instructors indulge in a little bit more theory. Anyway, it was a magnificent course. I hope the coming courses are as good as this one.

By Martin B

Apr 11, 2019

Greatly enjoyed it. As with the other courses in this specialization the discussion of the subjects is impeccable, especially if you've taken some preparatory mathematics courses. The reliance on Graphlab Create is a drag though.

By Big O

Dec 21, 2018

More detail on theory behind LDA and HMMs would have been useful. Otherwise, another brilliant course!

By Srinivas C

Jan 07, 2019

This was a really good course. It made me familiar with many tools and techniques used in ML. With this in hand, I will be able to go out there and explore and understand things much better.

By Abhishek S

Feb 10, 2018

Up to Expectation Maximization, the learning is tremendous. However, once past that, everything feels somewhat incomplete, since most assignments after that point are spoon-fed. Rating it four stars because of the initial lectures.