
Learner reviews and feedback for Sample-based Learning Methods, offered by the University of Alberta

4.7
927 ratings
192 reviews

Course Overview

In this course, you will learn about several algorithms that can learn near-optimal policies based on trial-and-error interaction with the environment, learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal-difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal-difference updates to radically accelerate learning.

By the end of this course you will be able to:

- Understand temporal-difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic-programming sweeps within a model
- Understand the connections between Monte Carlo, dynamic programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
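For readers unfamiliar with the methods the overview names, here is a minimal sketch of tabular Q-learning with epsilon-greedy exploration. The tiny corridor environment, hyperparameters, and episode count below are illustrative assumptions, not course materials:

```python
import random

# A tiny deterministic corridor MDP (an illustrative assumption, not from
# the course): states 0..4, actions 0 (left) / 1 (right); reaching state 4
# ends the episode with reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    done = next_state == GOAL
    return next_state, reward, done

# Tabular Q-learning with epsilon-greedy exploration.
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap off the greedy next-state value.
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# After training, the greedy policy moves right toward the goal.
policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)  # expected: [1, 1, 1, 1]
```

The update bootstraps off `max(Q[s2])` regardless of the action actually taken next, which is what makes Q-learning an off-policy method; Expected Sarsa and Dyna (also covered in the course) replace that target with an expectation under the behavior policy and with simulated model transitions, respectively.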

Top Reviews

AA
2020年8月11日

Great course; giving it 5 stars even though the assignments have some serious issues that really shouldn't be there. All the other parts are amazing, though. Good job.

KM
2020年1月9日

A really great resource to follow along with the RL book. Important suggestion: do not skip the reading assignments; they are really helpful, and they make following the videos and assignments easy.

Filter by:

Reviews 176–188 of 188 for Sample-based Learning Methods

By Liam M

Mar 26, 2020

The assignments are an exercise in programming far more than they are a learning tool for RL. The course lectures are good, and I recommend auditing the course.

By Marwan F A

Jun 21, 2020

The content is very helpful and clear; however, the notebook implementations are not as good and are sometimes misleading.

By Chan Y F

Nov 4, 2019

The video content is not elaborated enough; you need to read the book and search the web to understand the ideas.

By Yetao W

May 5, 2020

The course is good; however, the submission of result.zip is inconvenient.

By Jeel V

Jun 13, 2020

The videos could include a bit more technical detail on the algorithms.

By Duc H N

Feb 2, 2020

The last test is a little bit tricky

By Sanat D

Jul 29, 2020

The reading material is great (as are the lectures), but frankly, the hypersensitive autograder is a real hindrance. Correct implementations don't get full points: the grader is sensitive to things like the order of random-number-generator calls rather than checking for a correct range of solutions. To make things worse, the autograder gives poor feedback; I often had to rely on assignment discussions with people who had received similarly unhelpful feedback to debug my solutions.

By Vasilis V

Jun 15, 2020

Some explanations should be broken down into smaller pieces.

By Chungeon K

May 24, 2020

Too condensed. It seems the lecture time needs to be increased.

By Andreas B

Aug 22, 2020

I give the course a low rating for several reasons, the first being the most important. First: the instructors are basically absent. Having issues or problems? They don't bother; not a single reply from either instructor in the forums for months, or even years. Second: flawed and imprecise notebooks. There are well-known issues with random numbers, but no updates, and incorrect book references that will lead you to implement formulas other than the intended ones. Third: tons of short videos that are 30% summary and "what you will learn," which is ridiculous for 3-minute videos. Fourth: mathematical depth is missing after the first sub-course. Suggestion: watch the David Silver and Stanford YouTube lessons instead; they are free and better explained. Compared to, for instance, Andrew Ng's specialization, this one is really bad, mostly thanks to the complete disinterest of the instructors.

By Mansour A K

May 18, 2020

This is one of the worst courses I have ever taken in my life. The videos don't contain much content and presenters just read them off with no clarification or explanation. Furthermore, the book is also shit (despite the fact that it's the gospel of RL). The writers of the book, who are two well-respected scientists, really suck at writing books. There is another course (or specialization) from the National Research University Higher School of Economics called "Practical Reinforcement Learning". You probably should check it out before you take this one.

By JT

Dec 23, 2020

This course was reorganized during my break, reducing the content from 5 weeks to 4. When I came back to finish my last assignment, my grades for previously completed assignments and exams had disappeared, and I had to redo them to pass the course. It should not happen this way. The Coursera system does not seem very reliable, so it is better to keep a backup of the materials on your own computer. Please fix this technical error.

By rafael l n

Apr 29, 2021

Zero didactic value; it feels like they are just reading the book during the videos. I think they should include some more tangible examples.