About this Course
38,867 recent views

Course 3 of 4

100% online

Start instantly and learn at your own schedule.

Flexible deadlines

Reset deadlines to fit your schedule.

Advanced Level

This is an advanced course, intended for learners with a background in computer vision and deep learning.

Approx. 20 hours to complete

Suggested: 6 weeks of study, 5-6 hours per week...

English

Subtitles: English

What you will learn

  • Work with the pinhole camera model, and perform intrinsic and extrinsic camera calibration (a small projection sketch follows this list)

  • Detect, describe and match image features and design your own convolutional neural networks

  • Apply these methods to visual odometry, object detection and tracking

  • Apply semantic segmentation for drivable surface estimation
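The first outcome above mentions the pinhole camera model. As a small illustrative sketch (not taken from the course materials; the intrinsic and extrinsic values are made up), the snippet below projects a 3D world point into pixel coordinates. In practice the intrinsic matrix would come from a calibration routine such as OpenCV's cv2.calibrateCamera rather than being written by hand.

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths fx, fy and principal point cx, cy).
K = np.array([[640.0,   0.0, 320.0],
              [  0.0, 640.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics: rotation R and translation t of the world frame
# expressed in the camera frame (identity / zero here for simplicity).
R = np.eye(3)
t = np.zeros((3, 1))

def project(point_world):
    """Project a 3D world point into pixel coordinates with the pinhole model."""
    p_cam = R @ point_world.reshape(3, 1) + t    # world -> camera frame
    uvw = K @ p_cam                              # camera frame -> image plane
    return (uvw[:2] / uvw[2]).ravel()            # perspective divide

print(project(np.array([1.0, 0.5, 10.0])))       # e.g. [384. 272.]
```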


Syllabus - What you will learn from this course

1
2 hours to complete

Welcome to Course 3: Visual Perception for Self-Driving Cars

This module introduces the main concepts from the broad field of computer vision needed to progress through perception methods for self-driving vehicles. The main components include camera models and their calibration, monocular and stereo vision, projective geometry, and convolution operations.

...
4 videos (Total 18 min), 4 readings
4 videos
Welcome to the course (4 min)
Meet the Instructor, Steven Waslander (5 min)
Meet the Instructor, Jonathan Kelly (2 min)
4 readings
Course Prerequisites (15 min)
How to Use Discussion Forums (15 min)
How to Use Supplementary Readings in This Course (15 min)
Recommended Textbooks (15 min)
7 hours to complete

Module 1: Basics of 3D Computer Vision

This module introduces the main concepts from the broad field of computer vision needed to progress through perception methods for self-driving vehicles. The main components include camera models and their calibration, monocular and stereo vision, projective geometry, and convolution operations.
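As an informal illustration of the stereo depth material in this module (not the course's assignment code; the file names, focal length and baseline below are assumptions), this sketch computes a block-matching disparity map with OpenCV and converts it to depth:

```python
import cv2
import numpy as np

# Load a rectified stereo pair (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is returned as a fixed-point value scaled by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: Z = f * b / d, with an assumed focal length f (pixels)
# and baseline b (metres). Invalid (non-positive) disparities are masked out.
f, b = 640.0, 0.54
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * b / disparity[valid]
```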

...
6 videos (Total 43 min), 4 readings, 2 quizzes
6 videos
Lesson 1 Part 2: Camera Projective Geometry (8 min)
Lesson 2: Camera Calibration (7 min)
Lesson 3 Part 1: Visual Depth Perception - Stereopsis (7 min)
Lesson 3 Part 2: Visual Depth Perception - Computing the Disparity (5 min)
Lesson 4: Image Filtering (7 min)
4 readings
Supplementary Reading: The Camera Sensor (30 min)
Supplementary Reading: Camera Calibration (15 min)
Supplementary Reading: Visual Depth Perception (30 min)
Supplementary Reading: Image Filtering (15 min)
1 practice exercise
Module 1 Graded Quiz (30 min)
2
7 hours to complete

Module 2: Visual Features - Detection, Description and Matching

Visual features are used to track motion through an environment and to recognize places in a map. This module describes how features can be detected and tracked through a sequence of images and fused with other sources for localization as described in Course 2. Feature extraction is also fundamental to object detection and semantic segmentation in deep networks, and this module introduces some of the feature detection methods employed in that context as well.
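As a rough sketch of the feature pipeline this module describes (detection, description, matching, outlier rejection, and a visual odometry step), here is one way it might look using OpenCV's ORB features. The file names and intrinsic matrix are assumptions, and this is not the course's own implementation:

```python
import cv2
import numpy as np

# Two consecutive camera frames (hypothetical file names) and an assumed intrinsic matrix.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[640.0, 0.0, 320.0], [0.0, 640.0, 240.0], [0.0, 0.0, 1.0]])

# Detect and describe features, then match them with a brute-force Hamming matcher.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Outlier rejection with RANSAC while estimating the essential matrix,
# then recover the relative camera rotation and (unit-scale) translation.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```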

...
6 videos (Total 44 min), 5 readings, 1 quiz
6 videos
Lesson 2: Feature Descriptors (6 min)
Lesson 3 Part 1: Feature Matching (7 min)
Lesson 3 Part 2: Feature Matching: Handling Ambiguity in Matching (5 min)
Lesson 4: Outlier Rejection (8 min)
Lesson 5: Visual Odometry (9 min)
5 readings
Supplementary Reading: Feature Detectors and Descriptors (30 min)
Supplementary Reading: Feature Matching (15 min)
Supplementary Reading: Feature Matching (15 min)
Supplementary Reading: Outlier Rejection (15 min)
Supplementary Reading: Visual Odometry (10 min)
3
3 hours to complete

Module 3: Feedforward Neural Networks

Deep learning is a core enabling technology for self-driving perception. This module briefly introduces the core concepts employed in modern convolutional neural networks, with an emphasis on methods that have been proven to be effective for tasks such as object detection and semantic segmentation. Basic network architectures, common components and helpful tools for constructing and training networks are described.
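To make the training loop concrete, here is a minimal feed-forward network trained with batch gradient descent in plain NumPy. The toy data, layer sizes and learning rate are arbitrary choices for illustration, not anything prescribed by the course:

```python
import numpy as np

# A one-hidden-layer feed-forward network on a made-up binary classification task.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))                          # 256 samples, 4 input features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary target

W1, b1 = rng.normal(scale=0.1, size=(4, 16)), np.zeros((1, 16))
W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros((1, 1))
lr = 0.5

for step in range(500):
    # Forward pass: ReLU hidden layer, sigmoid output (cross-entropy loss implied).
    h = np.maximum(0.0, X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    # Backward pass: gradients of the mean cross-entropy loss.
    dlogits = (p - y) / len(X)
    dW2, db2 = h.T @ dlogits, dlogits.sum(axis=0, keepdims=True)
    dh = (dlogits @ W2.T) * (h > 0)
    dW1, db1 = X.T @ dh, dh.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```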

...
6 videos (Total 58 min), 6 readings, 1 quiz
6 videos
Lesson 2: Output Layers and Loss Functions (10 min)
Lesson 3: Neural Network Training with Gradient Descent (10 min)
Lesson 4: Data Splits and Neural Network Performance Evaluation (8 min)
Lesson 5: Neural Network Regularization (9 min)
Lesson 6: Convolutional Neural Networks (9 min)
6 readings
Supplementary Reading: Feed-Forward Neural Networks (15 min)
Supplementary Reading: Output Layers and Loss Functions (15 min)
Supplementary Reading: Neural Network Training with Gradient Descent (15 min)
Supplementary Reading: Data Splits and Neural Network Performance Evaluation (10 min)
Supplementary Reading: Neural Network Regularization (15 min)
Supplementary Reading: Convolutional Neural Networks (10 min)
1 practice exercise
Feed-Forward Neural Networks (30 min)
4
3 hours to complete

Module 4: 2D Object Detection

The two most prevalent applications of deep neural networks to self-driving are object detection, including pedestrians, cyclists and vehicles, and semantic segmentation, which associates image pixels with useful labels such as sign, light, curb, road, vehicle, etc. This module presents baseline techniques for object detection, and the following module introduces semantic segmentation; both can be used to create a complete self-driving car perception pipeline.
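Two small pieces of bookkeeping that 2D detection pipelines rely on are intersection-over-union (IoU) and non-maximum suppression (NMS). The sketch below is an informal NumPy illustration of both, not the course's reference code:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in [x1, y1, x2, y2] format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box, drop heavy overlaps."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(best)
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return keep
```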

...
4 videos (Total 52 min), 4 readings, 1 quiz
4 videos
Lesson 2: 2D Object detection with Convolutional Neural Networks (11 min)
Lesson 3: Training vs. Inference (11 min)
Lesson 4: Using 2D Object Detectors for Self-Driving Cars (14 min)
4 readings
Supplementary Reading: The Object Detection Problem (15 min)
Supplementary Reading: 2D Object detection with Convolutional Neural Networks (30 min)
Supplementary Reading: Training vs. Inference (45 min)
Supplementary Reading: Using 2D Object Detectors for Self-Driving Cars (30 min)
1 practice exercise
Object Detection For Self-Driving Cars (30 min)
5
2 hours to complete

Module 5: Semantic Segmentation

The second most prevalent application of deep neural networks to self-driving is semantic segmentation, which associates image pixels with useful labels such as sign, light, curb, road, vehicle, etc. The main use for segmentation is to identify the drivable surface, which aids in ground plane estimation, object detection and lane boundary assessment. Segmentation labels are also being directly integrated into object detection as pixel masks, for static objects such as signs, lights and lanes, and for moving objects such as cars, trucks, bicycles and pedestrians.
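As a minimal sketch of how a segmentation output becomes a drivable-surface estimate (the class ids and score tensor shape are assumptions, not the course's dataset conventions): take the per-pixel argmax over class scores and keep the road pixels.

```python
import numpy as np

# Hypothetical network output: per-pixel class scores of shape (num_classes, H, W).
# Assume class 0 = road, 1 = sidewalk, 2 = vehicle, ... (labels are made up).
scores = np.random.rand(5, 480, 640).astype(np.float32)
ROAD_CLASS = 0

# Per-pixel label = argmax over the class dimension; drivable surface = road pixels.
labels = scores.argmax(axis=0)
drivable_mask = labels == ROAD_CLASS

# The boolean mask can then feed ground plane estimation or lane boundary checks.
print("drivable pixels:", int(drivable_mask.sum()))
```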

...
3 videos (Total 31 min), 3 readings, 1 quiz
3 videos
Lesson 2: ConvNets for Semantic Segmentation (11 min)
Lesson 3: Semantic Segmentation for Road Scene Understanding (11 min)
3 readings
Supplementary Reading: The Semantic Segmentation Problem (30 min)
Supplementary Reading: ConvNets for Semantic Segmentation (30 min)
Supplementary Reading: Semantic Segmentation for Road Scene Understanding (30 min)
1 practice exercise
Semantic Segmentation For Self-Driving Cars (20 min)
6
7 hours to complete

Module 6: Putting it together - Perception of dynamic objects in the drivable region

The final module of this course focuses on the implementation of a collision warning system that alerts a self-driving car to the position and category of obstacles in its lane. The project comprises three major segments: 1) estimating the drivable space in 3D, 2) estimating semantic lanes, and 3) filtering erroneous object detections using semantic segmentation.
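For the third segment, one simple way to filter spurious detections is to check how much of each bounding box is covered by the expected segmentation class. The sketch below illustrates the idea; the class id, threshold and helper name are assumptions, not the project's reference solution:

```python
import numpy as np

VEHICLE_CLASS = 2  # assumed id of the 'vehicle' class in the segmentation map

def filter_detections(boxes, seg_labels, min_overlap=0.3):
    """Keep boxes whose area is sufficiently covered by vehicle-class pixels.

    boxes      : (N, 4) array of [x1, y1, x2, y2] pixel coordinates
    seg_labels : (H, W) array of per-pixel class ids
    """
    kept = []
    for x1, y1, x2, y2 in boxes.astype(int):
        patch = seg_labels[y1:y2, x1:x2]
        if patch.size == 0:
            continue
        coverage = np.mean(patch == VEHICLE_CLASS)
        if coverage >= min_overlap:   # enough supporting pixels -> keep the box
            kept.append([x1, y1, x2, y2])
    return np.array(kept)
```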

...
4 videos (Total 24 min), 1 quiz
4 videos
Final Project Hints (6 min)
Final Project Solution [LOCKED] (9 min)
Congratulations for completing the course! (3 min)

Instructor


Steven Waslander

Associate Professor
Aerospace Studies

About University of Toronto

Established in 1827, the University of Toronto is one of the world’s leading universities, renowned for its excellence in teaching, research, innovation and entrepreneurship, as well as its impact on economic prosperity and social well-being around the globe. ...

About the Self-Driving Cars Specialization

Be at the forefront of the autonomous driving industry. With market researchers predicting a $42-billion market and more than 20 million self-driving cars on the road by 2025, the next big job boom is right around the corner. This Specialization gives you a comprehensive understanding of state-of-the-art engineering practices used in the self-driving car industry. You'll get to interact with real data sets from an autonomous vehicle (AV)―all through hands-on projects using the open source simulator CARLA. Throughout your courses, you’ll hear from industry experts who work at companies like Oxbotica and Zoox as they share insights about autonomous technology and how that is powering job growth within the field. You’ll learn from a highly realistic driving environment that features 3D pedestrian modelling and environmental conditions. When you complete the Specialization successfully, you’ll be able to build your own self-driving software stack and be ready to apply for jobs in the autonomous vehicle industry. It is recommended that you have some background in linear algebra, probability, statistics, calculus, physics, control theory, and Python programming. You will need these specifications in order to effectively run the CARLA simulator: Windows 7 64-bit (or later) or Ubuntu 16.04 (or later), Quad-core Intel or AMD processor (2.5 GHz or faster), NVIDIA GeForce 470 GTX or AMD Radeon 6870 HD series card or higher, 8 GB RAM, and OpenGL 3 or greater (for Linux computers)....
Self-Driving Cars

Frequently Asked Questions

  • Once you enroll for a Certificate, you'll have access to all videos, quizzes, and programming assignments (if applicable). Peer review assignments can only be submitted and reviewed once your session has begun. If you choose to explore the course without purchasing, you may not be able to access certain assignments.

  • When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page, from which you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.

Have more questions? Visit the Learner Help Center.