
Learner reviews and feedback for Sample-based Learning Methods, offered by the University of Alberta

4.8
149 ratings
33 reviews

Course Overview

In this course, you will learn about several algorithms that can learn near-optimal policies based on trial-and-error interaction with the environment---learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal difference updates to radically accelerate learning. By the end of this course you will be able to:

- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna...
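To give a flavor of the TD control methods the overview mentions, here is a minimal sketch of a tabular Q-learning update in Python. This is an illustration only, not course material; the function name, table layout, and parameter values are all assumptions for the example.

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.99):
    """One tabular Q-learning update: move Q(s, a) toward the
    TD target r + gamma * max_a' Q(s', a')."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Tiny illustration: 2 states, 2 actions, values initialized to zero.
Q = np.zeros((2, 2))
Q = q_learning_update(Q, state=0, action=1, reward=1.0, next_state=1)
```

Because the update bootstraps from the max over next-state action values regardless of which action the agent actually takes next, Q-learning is an off-policy method; Expected Sarsa instead averages the next-state values under the current policy.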

Top Reviews

KN

Oct 03, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

IF

Sep 29, 2019

Great course. Clear, concise, practical. Right amount of programming. Right amount of tests of conceptual knowledge. Almost perfect course.

Filter by:

1–25 of 32 reviews for Sample-based Learning Methods

By Manuel V d S

Oct 04, 2019

Course was amazing until I reached the final assignment. What a terrible way to grade the notebook part. Also, nobody around in the forums to help... I would still recommend this to anyone interested, unless you have no intention of doing the weekly readings.

By Kaiwen Y

Oct 02, 2019

I spent 1 hour learning the material and coding the assignment, and 8 hours trying to debug it so that the grader would not complain. The grader sometimes insists on a particular order in the code, which does not really matter in the real world. The grader also inconsistently gives 0 marks to one part of the problem while giving full marks to another part that uses the same function (like numpy.max). However, the forum is quite helpful and the staff is generally responsive.

By LuSheng Y

Sep 10, 2019

Very good.

By Luiz C

Sep 13, 2019

Great course. Every aspect top notch.

By Ashish S

Sep 16, 2019

A good course with proper mathematical insights.

By Sohail

Oct 07, 2019

Fantastic!

By koji t

Oct 07, 2019

I made a lot of mistakes, but I learned a lot because of that.

It's a wonderful course.

By Alejandro D

Sep 19, 2019

Excellent content and delivery.

By Sodagreenmario

Sep 18, 2019

Great course, but there are still some little bugs in the notebook assignments that could be fixed.

By Stewart A

Sep 03, 2019

Great course! Lots of hands-on RL algorithms. I'm looking forward to the next course in the specialization.

By Mark J

Sep 23, 2019

In my opinion, this course strikes a comfortable balance between theory and practice. It is, essentially, a walk-through of the textbook by Sutton and Barto entitled, appropriately enough, 'Reinforcement Learning'. Sutton's appearances in some of the videos are an added treat.

By Ivan S F

Sep 29, 2019

Great course. Clear, concise, practical. Right amount of programming. Right amount of tests of conceptual knowledge. Almost perfect course.

By Damian K

Oct 05, 2019

Great balance between theory and demonstration of how all the techniques work. The exercises are prepared so that it is possible to focus on the core parts of the concepts, and if you wish, you can take a deep dive into the exercises and how the experiments are designed. Highly recommended course.

By Kyle N

Oct 03, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

By Alberto H

Oct 28, 2019

A great step towards the acquisition of basic and medium complexity RL concepts with a nice balance between theory and practice, similar to the first one.

[Note: the course requires mastering the concepts of the first one in the specialization, so don't start here unless you're sure you have mastered its contents.]

By Ignacio O

Oct 13, 2019

Great, informative and very interesting course.

By David P

Nov 03, 2019

Really a wonderful course! Very professional and high level.

By Wang G

Oct 19, 2019

Very nice explanations and assignments! Looking forward to the next 2 courses in this specialization!

By Sriram R

Oct 21, 2019

Well-done mix of theory and practice!

By AhmadrezaSheibanirad

Nov 10, 2019

This course doesn't cover all the concepts in the Sutton book, like n-step TD (Chapter 7) or some of Planning and Learning with Tabular Methods (Sections 8.5 through 8.11), but what they do teach and cover is practical, complete, and clear.

By Shi Y

Nov 10, 2019

One of my favorite Coursera courses: an RL course of moderate difficulty, highly recommended. I learned a lot that would have been hard to fully understand through self-study. Thanks to the instructors and TAs!

By John H

Nov 10, 2019

It was good.

By Rashid P

Nov 12, 2019

Best RL course I've ever done.

By Alex E

Nov 19, 2019

A fun and interesting course. Keep up the great work!

By Neil S

Sep 12, 2019

This is THE course to go with Sutton & Barto's Reinforcement Learning: An Introduction.

It's great to be able to repeat the examples from the book and end up writing code that outputs the same diagrams, e.g. the Dyna-Q comparisons for planning. The notebooks strike a good balance between hand-holding for new topics and letting you make your own mistakes and learn from them.

I would rate it five stars, but decided to drop one for now as there are still some glitches in the code of the notebook assignments, requiring work-arounds communicated in the course forums. I hope these will be addressed and the course materials polished to perfection in the future.