
Learner reviews and feedback for Natural Language Processing with Sequence Models

890 ratings
178 reviews


In Course 3 of the Natural Language Processing Specialization, you will: a) Train a neural network with GloVe word embeddings to perform sentiment analysis of tweets, b) Generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model, c) Train a recurrent neural network to perform named entity recognition (NER) using LSTMs with linear layers, and d) Use so-called 'Siamese' LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning. By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot! This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper...




Overall it was a great course. A little bit weak in theory, but sufficient for practical purposes. The question-duplication detection model was very cool. I enjoyed it a lot.



This is the third course of the NLP Specialization. This was a great course and the instructors were amazing. I really learned and understood everything they taught, like LSTMs, GRUs, Siamese networks, etc.


Reviews 151 - 175 of 186 for Natural Language Processing with Sequence Models

By Ruiwen W


Some errors in the assignments.

By V B



By JJ Y


Sequence models are heavy subjects, and it would be unrealistic to expect a 4-week course to go into all the depths of RNNs, GRUs, LSTMs, etc. This course does a great job covering important types of neural networks and showing their applications. However, the labs and assignments could have done more in (a) helping us look a little deeper into the implementations of different NN building components, and (b) aligning better with the lecture videos.

Really good examples: the Week 1 labs and assignment illustrate the implementations of some of the basic layer classes and outline the overall flow of NN training with Trax. The Week 4 labs and assignment illustrate the implementation of the loss layer based on the unique triplet loss function.
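For readers unfamiliar with the triplet loss mentioned above, here is a minimal, generic sketch using cosine similarity; this is a standard formulation for illustration only, not necessarily the exact variant the course assignment implements:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.25):
    """Generic triplet loss: require the anchor to be at least `margin`
    more similar to the positive than to the negative (cosine similarity).
    Illustrative only; the course's Siamese-network loss may differ."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(0.0, margin - cos(anchor, positive) + cos(anchor, negative))

a = np.array([1.0, 0.0])   # anchor question embedding (toy values)
p = np.array([1.0, 0.1])   # duplicate question (close to anchor)
n = np.array([0.0, 1.0])   # unrelated question
print(triplet_loss(a, p, n))  # 0.0: positive is already much closer
```

Swapping the positive and negative makes the loss positive, which is what drives the embeddings apart during training.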

Not-so-good examples: Week 1 uses a whole video explaining gradient calculation in Trax, yet there is no illustration of how it's integrated into backpropagation in Trax. The Week 2 videos and labs/assignment are more disjoint: there is a video explaining the scan() function, but it does not show up in the assignment at all.
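The scan() function the reviewer mentions follows the same pattern as jax.lax.scan (which Trax builds on): apply a step function across a sequence while threading a carry, exactly how an RNN threads its hidden state through time. A minimal pure-Python re-implementation of that pattern, with made-up toy weights:

```python
import numpy as np

def scan(step_fn, init, xs):
    """Minimal re-implementation of the scan pattern: step_fn takes
    (carry, x) and returns (new_carry, output); scan threads the carry
    through the whole sequence and stacks the per-step outputs."""
    carry, ys = init, []
    for x in xs:
        carry, y = step_fn(carry, x)
        ys.append(y)
    return carry, np.stack(ys)

def rnn_step(h, x):
    # Toy "RNN cell" with hard-coded illustrative weights.
    h_new = np.tanh(0.5 * h + 0.5 * x)
    return h_new, h_new  # (carry, per-step output)

xs = np.array([1.0, -1.0, 0.5])          # a length-3 input sequence
final_h, hs = scan(rnn_step, 0.0, xs)
print(hs.shape)  # (3,) -- one output per time step
```

The real jax.lax.scan compiles this loop instead of running it in Python, which is why frameworks prefer it over explicit for-loops.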

By Yaron K


The 4th week on Siamese networks was well done. The weeks on RNNs, GRUs, and LSTMs basically gave the equations and some intuition, but most of the emphasis was on building a model with them using Google's Trax deep learning framework, which the lecturers believe to be better than TensorFlow 2. At least when it comes to debugging, it isn't. Make the smallest error (say, with shape parameters) and you get a mass of error messages that don't really help. At least for shape errors there is no excuse for this, since all that is needed is to run checks on the first batch of the first epoch that pinpoint exactly where there's a shape discrepancy.
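The fail-fast check this reviewer suggests can be sketched in a few lines. This is framework-agnostic illustrative code (the function name and toy model are made up, not a Trax API): run one forward pass on the first batch and raise a readable error before training starts.

```python
import numpy as np

def check_first_batch(model_fn, batch, expected_out_shape):
    """Forward one batch through the model and fail fast with a clear
    message if the output shape is wrong -- instead of letting a shape
    bug surface deep inside training as a wall of framework errors."""
    out = model_fn(batch)
    if out.shape != expected_out_shape:
        raise ValueError(
            f"shape mismatch: model produced {out.shape}, "
            f"expected {expected_out_shape}"
        )
    return out

# Toy model: mean-pool over the sequence axis.
toy_model = lambda x: x.mean(axis=1)
batch = np.zeros((32, 10, 8))   # (batch, seq_len, features)

out = check_first_batch(toy_model, batch, (32, 8))
print(out.shape)  # (32, 8)
```

Calling it with a wrong expectation, e.g. `check_first_batch(toy_model, batch, (32, 10))`, raises immediately with the two shapes side by side.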

By Amlan C


Despite the theoretical underpinnings, I do not feel this course lets you write an NER algorithm on your own. The majority of these courses have been using data supplied by Coursera, and the same is true of the models. In real life we have to either create this data or use some open-source data, for example from Kaggle. I think it would be better if the course were oriented around publicly available, appropriate data, with models trained by students to be used for actual analysis.

By Maury S


Like some of the other courses in this specialization, this one has promise but comes off as a somewhat careless effort compared to the usual quality of content from Andrew Ng. The lecturers are OK but not great, and it is unclear what the role of Łukasz Kaiser is beyond reading introductions to many of the lectures. There is a strange focus on simplifying with the Google Trax framework at the cost of not really teaching the underlying maths.

By Petru R


The course requires a solid background in deep learning; it does not explain LSTMs in detail, or how the code keeps the weights of the two branches of the Siamese network identical.

Does Trax provide other ways of generating data for Siamese-network training besides writing a custom function?
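On the weight-sharing question the reviewer raises: in most frameworks the two branches of a Siamese network stay identical simply because they are the same object, so there is only one set of parameters to update. A toy sketch (class and names are illustrative, not Trax code):

```python
import numpy as np

class SharedEncoder:
    """Toy encoder with one weight matrix. Passing both inputs through
    the SAME instance is the usual Siamese trick: there is only one set
    of parameters, so the two branches cannot diverge."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, out_dim))

    def __call__(self, x):
        return np.tanh(x @ self.W)

encoder = SharedEncoder(8, 4)           # one encoder, used twice
q1 = np.ones((1, 8))                    # "question 1" features (toy)
q2 = np.ones((1, 8))                    # "question 2" features (toy)
v1, v2 = encoder(q1), encoder(q2)       # both branches share encoder.W
print(np.allclose(v1, v2))  # True: identical inputs, identical weights
```

During training, gradients from both branches accumulate into the single shared weight matrix, which is what "keeping the weights identical" amounts to in practice.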

By Business D


I regret the lack of proper guidance in the coding exercises, compounded by the incomplete documentation of the Trax library. I also feel we could build models with greater performance. An accuracy of 0.54 for the identification of question duplicates doesn't seem to be state of the art...

You could do better!

By Rajaseharan R


Too much focus on the data generator in the assignments. There should be a library function in Trax to do it. One might have to do some data preparation beforehand, but the generator should be a standard library function. Also, I had hoped to learn in a bit more depth about entity labelling.

By Huang J


The course videos are too short to convey the ideas behind the methodology; you need to understand the methodology before following the course material. Also, the introduction to Trax is fine, but I would prefer to have a version of the assignments in TensorFlow.

By A V A


Good course teaching the applications of LSTMs/GRUs in language generation, NER, and matching question duplicates using Siamese networks. It would have been more helpful if there were more depth in the topics.

By J N B P


This course is good for practical knowledge, with really good projects, but it lags in the theoretical part; you must already be familiar with the concepts to get the most out of this course.

By Nguyen B L


I am now confused by too many deep learning frameworks. Also, the content somewhat repeats the Deep Learning Specialization.

By shinichiro i


I just want them to use Keras, since I have no inclination to study a shiny new framework such as Trax.

By martin k


Lectures are quite good, but the assignments are really bad. Not helpful at all.

By Deleted A


Assignments were easy and similar. Learned less than expected.

By Alberto S


Content is interesting, but some details are under-explained.

By Ashim M


Would've been better with a better-documented library.

By Mahsa S


I would prefer to learn more about NLP in PyTorch.

By Leon V


Grader output could be more useful.

By Kota M


Sadly, the quality of the material is much lower than the previous two courses. The assignments repeatedly ask us to implement data generators with a lot of for-loops. We should focus more on the network architecture rather than Python programming. That being said, the implementation is not good either. Learners would have to learn to program anyway.
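For context, the kind of hand-rolled batch generator this reviewer is describing typically looks something like the following sketch (illustrative only; the actual assignment code differs):

```python
import random

def data_generator(examples, batch_size, shuffle=True, seed=0):
    """Sketch of a typical assignment-style generator: yields fixed-size
    batches forever, reshuffling the index order on each full pass."""
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    while True:
        if shuffle:
            rng.shuffle(idx)
        # Drop the ragged tail so every batch has exactly batch_size items.
        for start in range(0, len(idx) - batch_size + 1, batch_size):
            yield [examples[i] for i in idx[start:start + batch_size]]

gen = data_generator(list(range(10)), batch_size=4)
batch = next(gen)
print(len(batch))  # 4
```

It is exactly this boilerplate (index bookkeeping, shuffling, the infinite loop) that the reviewer argues a framework should provide out of the box.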

By Patrick C


Assignments are very difficult to complete because of inaccurate information (off-by-one errors on indices and other sloppy mistakes). You also don't learn much from them because almost all the code is already provided. It would be much better if they built up your understanding from first principles instead of rushing through fill-in-the-blank problems.

By Mostafa E


The course did well in explaining the concepts of RNNs... but it may in fact have provided less knowledge than the NLP course in Deep Learning specialization.

I was looking forward to seeing more details on how translation works using LSTMs, going over some famous LSTM networks such as GNMT, and explanations of accuracy measures such as the BLEU score.

By Greg D


Spends a lot of time going over tedious implementation details rather than teaching interesting NLP topics and nuances, especially in the assignments. The introduction to Trax seems to be the only saving grace; one bonus star :)))).

Given that Andrew Ng's course is suggested background for this course, this is a big step (read: fall) down.

By Artem R


The course could be completed without watching the videos, just by using the hints and comments in the assignments. The videos are short and shallow, and the choice of deep learning framework (Trax) is questionable; I won't use it in production.

Although the course is 4 weeks long, it could be completed in 4 days; I don't feel it was worth the time.