
Learner reviews and feedback for Sample-based Learning Methods by University of Alberta

860 ratings
173 reviews


In this course, you will learn about several algorithms that can learn near-optimal policies through trial-and-error interaction with the environment, i.e., from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal-difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal-difference updates to radically accelerate learning.

By the end of this course you will be able to:
- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
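To make the kind of algorithm the course covers concrete, here is a minimal sketch of tabular Q-learning on a toy chain MDP. The environment, function name, and hyperparameters are my own illustration, not taken from the course assignments; the update rule is the standard one, Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)].

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy deterministic chain MDP (illustrative only).

    States 0..n_states-1; actions 0 (left) and 1 (right); the agent starts
    at state 0, and reaching the last state ends the episode with reward 1.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning target: bootstrap off the greedy value of the next state
            target = r if s2 == n_states - 1 else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy should prefer "right" in every non-terminal state, with values decaying by roughly a factor of γ per step away from the goal.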



Great course; giving it 5 stars even though the assignments have some serious issues that really shouldn't be there. All the other parts are amazing. Good job


Really great resource to follow along with the RL book. Important suggestion: do not skip the reading assignments; they are really helpful, and they make following the videos and assignments easy.


151 – 170 of 170 reviews for Sample-based Learning Methods

By Soren J


Very good, although fairly advanced Python skills are needed to pass this course.

By Yu G


A tough, challenging course, and very worthwhile taking!

By Sachin K


Passing the notebook assignments is hellish due to strict decimal matching for numerical computations. You must perform the steps in one specific order or the autograder comparisons won't work. The course itself is fine, but it is more or less a rehash of the book, so you may as well read that. There is no special intuition, though the notebooks do provide a good experimental design strategy, and many of the experiments listed in the book are actually implemented in the assignments, which aids learning. There is no technical support staff on Coursera anymore, so you are on your own when taking the course. The discussion forums are littered with discussion prompts, and new ones are added every week, so it's not easy to find anything in there. Coursera has become substandard, and this rating reflects a mixture of the course and Coursera as a platform.

By Mark L


This course presents a large number of techniques/algorithms in addition to the ones presented in the first course, and I find it hard to keep track of them all. It would be most helpful if the techniques were summarized in a table that lists their various attributes. In addition, I would like to see some examples of practical problems that can be solved with these techniques, beyond the explanatory "toy" problems. I also find the pace of the lectures a little "choppy", with a lot of very short lectures, each with its own introduction and summary.

By Hadrien H


Still a very good course, but I felt that this second course covers less of the book than the first one. The lectures are quite a bit shorter than in the first part, while the book content gets richer. The assignments are a bit more complete, though.

By Mukesh


There should be more examples on Q-learning and Expected SARSA; the course just compares different algorithms across different parameters. The autograder is annoying too and really needs some work. Otherwise the course is okay.

By Alessandro o


To be honest, I think that quite complex topics are treated too quickly, and it's basically up to you to figure them out. Some ideas would have benefited from a more detailed explanation.

By Pratik S


The lectures were very short, only 5–7 minutes, of which 1–2 minutes were overview and summary. Had the lectures been longer, more examples could have been explained.

By Liam M


The assignments are an exercise in programming far more than they are a learning tool for RL. The course lectures are good, and I recommend auditing the course.

By Marwan F A


The content is very helpful and clear; however, the notebook implementations are not so good and are sometimes misleading.

By Chan Y F


The video content is not elaborated enough; you need to read the book and search the web to understand the ideas.

By Yetao W


The course is good; however, the submission process is inconvenient.

By Jeel V


The videos could include a little more technical detail about the algorithms.

By Duc H N


The last test is a little bit tricky

By Sanat D


The reading material is great (as are the lectures), but frankly, the hypersensitive autograder is a real hindrance. Correct implementations don't get full points because the grader is sensitive to things like the order of random number generator calls, rather than accepting a correct range of solutions. To make things worse, the autograder gives poor feedback; I often had to rely on assignment discussions with people who had received similarly unhelpful feedback to debug my solutions.

By Vasilis V


Some explanations should be broken down into smaller pieces.

By Chungeon K


It is too condensed. The lecture time probably needs to be increased.

By Andreas B


I give the course a low rating for several reasons, the first being the most important one. First: the instructors are basically completely absent. Having issues or problems? They don't bother. Not a single reply from either instructor in the forums for months or years. Second: flawed and imprecise notebooks. There are well-known issues with random numbers, but no updates, and incorrect book references that will lead you to implement formulas other than the intended ones. Third: tons of short videos with 30% summary and "what you will learn", which is ridiculous for 3-minute videos. Fourth: mathematical depth is missing after the first course. Suggestion: watch the David Silver and Stanford YouTube lectures instead; they are free and better explained. Compared to, for instance, Andrew Ng's specialization, this one is really bad, mostly thanks to the complete disinterest of the instructors.

By Mansour A K


This is one of the worst courses I have ever taken in my life. The videos don't contain much content, and the presenters just read them off with no clarification or explanation. Furthermore, the book is also bad (despite the fact that it's the gospel of RL); its authors, two well-respected scientists, really struggle at writing books. There is another course (or specialization) from the National Research University Higher School of Economics called "Practical Reinforcement Learning". You should probably check it out before you take this one.

By JT


This course was reorganized from 5 weeks down to 4 weeks during my break. When I came back to finish my last assignment, my grades for previously completed assignments and exams had disappeared, and I had to redo them to pass the course. It should not happen this way. The Coursera system does not seem very reliable, so it is better to keep a backup of the materials on your own computer. Please fix this technical error.