A truly great course; it focuses on the details you need, at a good pace, building up the foundations needed before relying more heavily on libraries and abstractions (which I assume will follow).
A neatly organized course introducing students to the basics of processing text data, learning word embeddings, and most importantly how to interpret the word embeddings. Great job!!
Created by sukanya n
The Viterbi algorithm could be explained better, and Week 4 seemed very rushed, with lots of details just glossed over. The Week 4 assignment seemed pretty easy compared to those of previous weeks.
Created by Dan C
Lots of quality-control issues. Using paid customers as proofreaders is tacky.
Created by Kabakov B
It is the worst course on deeplearning.ai ever. It is too simple for those who already took the DL specialization and too difficult for newcomers. The 'lectures' are too superficial, and you will barely understand a thing. But the tasks are huge -- a lot of spaghetti code with several levels of nested IFs, with constructions like A[i][j:k][l+1]. You will spend your time producing a bad implementation of things that have been implemented 100K times already, and it will not give you any insight into how they work because of the lack of theory. And nobody will teach you to use standard tools on simple, understandable examples.
Created by Oleh S
This course is more mature than the first one. The materials are very interesting and provide nice intuition for the probabilistic models. One can study the basics of auto-correction, Markov and Hidden Markov Models, as well as N-gram models and a very important approach, Word2Vec, which is an essential part of modern deep learning algorithms. I really enjoyed this part.
However, there are some minor suggestions:
1. Lecture durations could be longer -- this would allow more depth on both the math and the code side. I know this is a simplified version of a real academic course, but in order to increase the quality you should consider increasing the duration;
2. Programming assignments are not balanced, and there are still some minor ambiguities. For instance, the first and the HMM assignments are tough, whereas the last one is a piece of cake;
3. The course could be enhanced with an additional part dedicated to probability theory, maybe a few more lectures.
I recommend this course to everyone interested in NLP. Note that you should read and study the additional resources to reinforce your knowledge; this is just the starting point for a good researcher. Keep going, guys!
Created by Gabriel T P C
The lessons are shallow, the exercises too repetitive.
Homework is too easy. The answers are pretty much given to us.
Created by Manik S
Although the content is great, the way of teaching falls short of how Andrew teaches in the Deep Learning specialization. More interactive teaching with a pen tablet would be more engaging. The whole course feels like a recital of the slides. And the instructor's voice is a little irritating to listen to over longer durations. Otherwise the course provides a lot of learning, if you can bear it.
Created by Zhendong W
A great course indeed! However, it would be even nicer to have the lecture videos at a slower pace, maybe going through the examples in more detail. Sometimes it felt too quick, jumping straight from the theory to the examples.
Created by Mark M
This second course, like the first, feels like a first- or second-year university course. Sometimes the explanations are weak or missing. There was no explanation of why the Viterbi algorithm works, and no explanation of how to decide which embedding extraction method (W1 columns, W2 rows, or the average of the two) to use. There seemed to be little or no TA support: many people were posting questions and not receiving answers from TAs. I posted the mistakes I identified in the course content, but I don't think anyone is going to act on this. It would have been good if the last exercise were repeated in TensorFlow. It would also have been good to actually use the embeddings for something in the last exercise; from the PCA graph, the embeddings looked pretty poor.
Created by Greg D
The lecture videos are slow and shallow, with little focus on building intuition. Likewise with the assignments: instead of relying on existing libraries (which are popular for a reason), they painfully go through implementing things in detail, which doesn't really help you in any way later on.
100% recommend saving your time and money (and the sanity wasted on meticulously hand-rolling things) and doing something else instead.
Created by John A J
It was a good course for an introduction to autocorrect, autocomplete, and creating your own word embeddings. However, I feel the instructor focused too much on implementation details. The concepts of why the pioneers were able to formulate the solutions, and the train of thought behind the different algorithms, are lost. Although it taught me a bit of implementation, for me the implementation is just the cherry on top, as these things can easily be googled. So it would have a better impact if it also taught the concepts and thinking behind these algorithms, so that I could reuse the underlying ideas. Overall, it is a good course to get started.
Created by Laurence G
The material covered provides a good tour of probabilistic language models, but the course needs work. Some issues: excessive reading off of mathematical formulas without providing the intuition behind them; the section on Viterbi was awful; and a large chunk of week 4 could be replaced with a single block of PyTorch/TensorFlow and a note saying: "For more detail, go take the deep learning course."
Created by Andreas B
Too many autograder issues. For instance, in week 4, even if all your code is correct, you get spurious error messages claiming your results are of a completely wrong type. Also, some minor math errors, and deeper insights into the math and motivations are missing.
Created by François D
Great teacher, good pace in lectures and assignments. There is of course some redundancy with respect to the previous specializations, but it's nice to feel that you understand the content a bit better every time. Didn't find the forums (internal & Slack) very useful; they could be better structured. Can't wait for the next 2 courses.
Created by Manzoor A
Excellent! I know this course is the beginning of my NLP journey, but I couldn't ask for more than this. The ungraded labs are very useful for practicing and then applying what you learn to the assignment. I am giving 5 stars because there are only 5.
Created by Sohail Z
Brilliant course!!!! I love every aspect of it. I am really grateful to the deeplearning.ai team for such amazing courses. They are easy to digest and provide sufficient math knowledge to understand the models.
Created by Alan K F G
Professor Younes really makes it easier for me to follow along with the lectures and stay focused. The structure of the course helped me a lot: I constantly reviewed the same concepts as I went further, in order to learn new things.
Created by Saurabh K
I had a wonderful experience. Try not to look at the hints; solve it yourself. It is an excellent course for getting in-depth knowledge of how the black boxes work. Happy learning.
Created by Kritika M
This course is great. Actually the NLP specialization so far has been really good. The lectures are short and interesting and you get a good grasp on the concepts.
Created by Andrei N
A great course in the very spirit of Andrew Ng's original ML course, with lots of details and explanations of fundamental approaches and techniques.
Created by Minh T H L
Thanks for sharing your knowledge. I was happy during the course, and I also left a couple of pieces of feedback for minor improvements. All the best.
Created by Ajay D
The course was very insightful about the latest advances in the field of NLP. The exercises were very hands-on, and I loved that. However, I felt a bit unsatisfied, as I didn't see any large dataset in action; maybe my expectation was wrong. I was also hoping to see some more applications of these language models and word embeddings in the course.
Created by Kravchenko D
Nice course, but the assignments are less practical than in the first course of this specialization. The last assignment was generating word embeddings with your own neural network. The whole process of writing your own neural network is nice, except that the resulting word embeddings look very bad and ungrouped on the plot, while the text in the notebook says: "You can see that woman and queen are next to each other. However, we have to be careful with the interpretation of this projected word vectors," without explaining what is wrong with the results. So I think the last assignment should be reworked by the reviewers to produce illustrative results at the end, not just "the word embeddings at the end are bad; bye-bye, see you in the next course."
Created by Vitalii S
Good information and presentation, but someone should work on the function grading.
The problem is that when you do something wrong in function C7, the grader still thinks it is OK from a code-and-results point of view. Only at C11 do you figure out that something went wrong, and then you waste time searching for what is incorrect, all the way back down to C7.
I think you should add some kind of unit testing for each function to make sure it is really correct.
Created by Kartik C
The content and lectures are very good, but the assignments are overly restrictive, forcing you to do everything in exactly one way and giving you no room to try (and maybe fail sometimes) while exploring different approaches. It feels like during the assignments you are not learning anything, just doing what you are told to do.