
Learner ratings and feedback for Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning

13,218 ratings
2,803 reviews


If you are a software developer who wants to build scalable AI-powered algorithms, you need to understand how to use the tools to build them. This course is part of the upcoming Machine Learning in TensorFlow Specialization and will teach you best practices for using TensorFlow, a popular open-source framework for machine learning. The Machine Learning course and Deep Learning Specialization from Andrew Ng teach the most important and foundational principles of Machine Learning and Deep Learning. This new TensorFlow Specialization teaches you how to use TensorFlow to implement those principles so that you can start building and applying scalable models to real-world problems. To develop a deeper understanding of how neural networks work, we recommend that you take the Deep Learning Specialization…



Good intro course, but the Google Colab assignments need to be improved. Submitting a Jupyter notebook was much easier; why would I want to log in to my Google account to be part of this course?


Great course to get started building convolutional neural networks in Keras for image classifiers. This is probably the best way to get beginners into deep learning for computer vision.


Reviews 2176–2200 of 2,796 for Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning


By Richard H


Great introduction to using TensorFlow to implement convolutional networks.

I took Andrew Ng's Stanford course first, so many of the concepts were very familiar. In some cases the detail was a little shallow, probably to avoid interfering with getting on with implementation, but this course does point to more detailed outside references on topics like how convolutions help identify features and the learning factor.
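The idea of a convolution identifying features can be illustrated without any framework. Below is a hand-rolled sketch (not code from the course): a classic 3x3 vertical-edge kernel slid over a tiny two-tone "image", responding only where the left and right neighborhoods differ.

```python
# Framework-free sketch: a 3x3 vertical-edge filter over a tiny 2-D "image".
# Deep-learning frameworks compute cross-correlation (no kernel flip), as here.

def conv2d_valid(image, kernel):
    """2-D convolution with 'valid' padding over nested lists."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# Dark (0) on the left, bright (1) on the right: one vertical edge.
image = [
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
# Vertical-edge kernel: negative weights left, positive weights right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
result = conv2d_valid(image, kernel)  # large values only near the edge
```

The flat regions produce 0 while positions straddling the edge produce a strong response, which is exactly the "feature map" intuition the lectures build on.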

The Jupyter notebooks were great in that you don't need to worry about the environment much; it's already set up, which was a big worry of mine with many of these types of courses. But there were quirks, and a few times I (and some of the other students) got tripped up for a little while. If you are a developer like I used to be, though, troubleshooting and debugging environment and code issues is a small hurdle.

Kudos to the instructors and those who set up the course. This is otherwise very hard material to teach and to build good hands-on evaluation for, which they did really well, a couple of kinks aside.

By César R P


The course is a great introduction to the use of TensorFlow. Keep in mind that this is a practitioner's guide to deep learning, so there's not much theory involved; you just get an intuition for how things work. I'd say this is a great start to the specialization, which I feel will probably complement Andrew Ng's specialization greatly (that one delves into the theory, but the code you write isn't really what one would use in day-to-day ML projects).

The only thing I didn't like (and why I docked a star) was the programming assignments. Too easy, and the autograder barely checks anything. You need to explore things by yourself and be disciplined, as the programming exercises let you get away with anything. That said, this seems to happen only in this first course (maybe as a way to encourage people to keep moving forward), and the exercises get better in what I've seen of the next course.

All in all, a great course. Mr. Moroney is one of the best teachers I've ever seen, and he communicates his knowledge and passion with great ease.



This is a great course with very useful lessons that help students feel confident about implementing deep learning solutions. It is a perfect follow-up to the Deep Learning Specialization, which lays down the theoretical foundations. The instructor is great; he talks about real-world problems (not just Fashion MNIST, but non-centered, colored, and large images) and explains them very clearly.

There is some lack of attention to detail in the course, which manifests itself especially in the code (typos, code and code comments not agreeing with each other, and entire lessons that are slotted for 10 minutes or more but have no action other than pressing the "mark as complete" button), which makes you feel that you are missing something. Also, the discussion board isn't as responsive (especially the moderators) as in other courses in the past.

By Ilya R


I like Laurence's teaching style; this isn't the first course of his I've taken. It's nice that he has some interesting datasets of his own and some research questions. But I have a couple of suggestions for this course.

First of all, the TensorFlow documentation has a lot (and I mean A LOT) of good tutorials, so I'd expect some of them to be included in the course. That's what you'd expect a Google developer advocate to do, and I could really use some help understanding those tutorials.

Second, the course is far too basic. That's probably OK, but we really need a follow-up course that digs much deeper into, for example, image processing and custom metrics and losses. There's really not enough background here to reproduce results from, say, the AI for Medicine courses you may take after this series.
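The "custom losses" a follow-up course might cover boil down to a function of true and predicted values. As a framework-free illustration (not course material), here is the math behind the Huber loss in plain Python; in TensorFlow this same function of `(y_true, y_pred)` tensors would be passed to `model.compile(loss=...)`.

```python
# Plain-Python sketch of a custom loss: the Huber loss, quadratic for
# small errors and linear for large ones (robust to outliers).

def huber_loss(y_true, y_pred, delta=1.0):
    """Mean Huber loss over two equal-length sequences of floats."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        e = abs(t - p)
        if e <= delta:
            total += 0.5 * e * e                 # quadratic region
        else:
            total += delta * (e - 0.5 * delta)   # linear region
    return total / len(y_true)

# Errors of 0, 0.5, and 3.0: only the last one falls in the linear region.
loss = huber_loss([1.0, 2.0, 3.0], [1.0, 2.5, 6.0])  # 0.875
```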

By John S


The course does a good job of teaching basic TF functionality. It doesn't go deep into how NNs actually work, which I suppose is fine if you already have that knowledge. The exercises are a little finicky when it comes to grading. My biggest hang-up is that the time estimated for each week is extremely inflated. Half the modules are "Readings" allotted "10 minutes", and most of these are a single paragraph or just a rehash of what was said in the previous video. The exercises also shouldn't take anyone near 3 hours to complete. So keep in mind that each week's material can probably be completed in 30–45 minutes at most, not 6 hours!

By Avinash M


I thoroughly enjoyed the course and programming various CNNs in TensorFlow. However, in certain lectures (especially the ones with the horse/human datasets), the instructor could spend some more time explaining the process of downloading and storing the training and validation images. It took me some effort and quite a lot of Googling to figure out those parts of the code. While that might not be directly related to the task at hand (binary classification), it is, in my opinion, necessary to understand some of these ancillary tasks as well. Perhaps these explanations could be included as optional videos for those who wish to understand these features of TF.
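The "downloading and storing" step the reviewer mentions mostly comes down to unpacking the archive into one subdirectory per class, because Keras's `flow_from_directory` infers labels from directory names. The sketch below builds a stand-in for that layout with hypothetical file names and counts, purely to show the structure the generator expects.

```python
# Sketch of the class-per-subdirectory layout that
# ImageDataGenerator.flow_from_directory expects. File names and counts
# here are hypothetical stand-ins for the real extracted .png files.

import os
import shutil
import tempfile

def build_fake_dataset(root):
    """Create <root>/<split>/<label>/ directories with placeholder files."""
    for split in ("train", "validation"):
        for label in ("horses", "humans"):
            d = os.path.join(root, split, label)
            os.makedirs(d, exist_ok=True)
            for i in range(3):  # stand-ins for the real images
                open(os.path.join(d, f"img{i}.png"), "w").close()

def count_images(root, split):
    """Per-class file counts, like the 'Found N images' message Keras prints."""
    split_dir = os.path.join(root, split)
    return {
        label: len(os.listdir(os.path.join(split_dir, label)))
        for label in sorted(os.listdir(split_dir))
    }

root = tempfile.mkdtemp()
build_fake_dataset(root)
counts = count_images(root, "train")  # {'horses': 3, 'humans': 3}
shutil.rmtree(root)
```

With this layout in place, pointing a generator at `<root>/train` is enough for it to discover both classes, which is why the notebooks spend their setup code on directory handling rather than labeling.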

By Kaustubh D


This is an excellent course for getting hands-on. Keeping some tasks repetitive, like the callback functions, helps make the material stick. Just as every week's programming assignment involved writing the callback function, if there were other TF functions/methods for the coder to implement and override, and other TF abstract classes to extend, that would have been the cherry on top!
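The weekly callback exercise the review refers to is typically "stop training once accuracy crosses a threshold". In the course this subclasses `tf.keras.callbacks.Callback` and sets `self.model.stop_training`; below is a framework-free sketch of the same `on_epoch_end` control flow, with a simulated training loop standing in for `model.fit`.

```python
# Plain-Python sketch of the recurring course exercise: a callback that
# halts training once a metric crosses a threshold. In Keras this would
# subclass tf.keras.callbacks.Callback and set self.model.stop_training.

class AccuracyThresholdCallback:
    """Mimics the hook Keras invokes after every training epoch."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.stop_training = False

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get("accuracy", 0.0) >= self.threshold:
            self.stop_training = True

# Simulated training loop feeding per-epoch metrics to the callback.
cb = AccuracyThresholdCallback(threshold=0.95)
epochs_run = 0
for epoch, acc in enumerate([0.80, 0.91, 0.96, 0.99]):
    epochs_run += 1
    cb.on_epoch_end(epoch, {"accuracy": acc})
    if cb.stop_training:
        break  # training halts at epoch 3, before the metric list is exhausted
```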

Drilling down from the bigger picture of model definition seemed extremely useful.

And since there are tons of courses on the theory of ML and DL, thank god this one just focuses on coding it out.

By Egor E


I like the structure and content of this introductory course, and the easy, clear way Laurence Moroney explains all this material. In particular, I like the clearly formulated exercises. During the course we got a great bulk of working examples in Jupyter notebooks, containing full lectures, notes, and links to supporting materials!

What I would improve in the course is to shift the balance a little from solving problems toward technical implementation. We learn a lot about using CNNs for image recognition, but it would be great to hear in more detail about calculating the input and output shapes for layers.
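The shape calculation the review asks for can be sketched with plain arithmetic: the standard rules for 'valid' convolutions and max pooling, matching what `model.summary()` reports for such a stack. The 28x28 input below is a Fashion-MNIST-style assumption, not taken from the course.

```python
# Spatial-shape arithmetic for a Conv2D('valid') -> MaxPooling2D stack,
# one dimension at a time. No TensorFlow needed; these are the standard rules.

def conv2d_out(size, kernel=3, stride=1, padding="valid"):
    """Output size along one spatial dimension of a Conv2D layer."""
    if padding == "same":
        return -(-size // stride)            # ceil(size / stride)
    return (size - kernel) // stride + 1     # 'valid': no padding

def maxpool_out(size, pool=2):
    """Output size along one spatial dimension of a MaxPooling2D layer."""
    return size // pool

# A 28x28 input (Fashion-MNIST-style) through two Conv(3x3) + Pool(2x2) blocks:
s = 28
shapes = []
for _ in range(2):
    s = conv2d_out(s, kernel=3)   # 28 -> 26, then 13 -> 11
    s = maxpool_out(s, pool=2)    # 26 -> 13, then 11 -> 5
    shapes.append(s)              # [13, 5]
```

Tracing these two formulas layer by layer reproduces the shrinking output shapes (26, 13, 11, 5) that otherwise only appear, unexplained, in the `model.summary()` printout.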