How do robots determine their state in real time and extract information about their surroundings from noisy sensor measurements? In this module, you will learn how to let robots incorporate uncertainty into their estimates and learn from a dynamic, changing world. Special topics include probabilistic generative models and Bayesian filtering for localization and mapping.


A course from the University of Pennsylvania

Robotics: Estimation and Learning

314 ratings



From the lesson

Bayesian Estimation - Localization

We will learn about robotic localization. Specifically, our goal this week is to understand how range measurements, coupled with odometry readings, can place a robot on a map. Later in the week, we introduce 3D localization as well.

- Daniel Lee, Professor of Electrical and Systems Engineering

School of Engineering and Applied Science

Previously, we have seen a localization method based on

particles that have a three-dimensional state of x, y, and theta.

This is sufficient for a ground robot.

However, when not constrained to the ground,

a pose has six degrees of freedom, which requires exponentially more

points to be sampled in order to produce reasonable registration performance.

Instead of relying on particles, we can use a direct optimization

to find the registration between our measurements and the map.

After reviewing what we have learned in the previous weeks,

I will introduce the ICP (Iterative Closest Point) algorithm,

as well as odometry for three dimensional localization.

Let us start with a brief review of the expectation maximization

algorithm from week one.

We have seen that this algorithm is useful for complicated optimizations,

such as the Gaussian mixture model parameter estimation problem.

With the introduction of an initial guess coupled with a latent variable,

usually expressing membership,

we are able to obtain a locally optimal solution for a given problem.
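The alternation described here can be sketched for a tiny one-dimensional Gaussian mixture. This is a minimal illustration of the E-step/M-step idea, not the course's code; the function name `em_gmm_1d` and the data are made up, and the variances and weights are held fixed so only the means are estimated:

```python
import math

def em_gmm_1d(data, mu1, mu2, iters=50):
    """EM for a two-component 1D Gaussian mixture.
    Variances are fixed at 1 and weights at 1/2, so only the means move."""
    for _ in range(iters):
        # E-step: soft membership (responsibility) of each point for component 1
        resp = [math.exp(-0.5 * (x - mu1) ** 2) /
                (math.exp(-0.5 * (x - mu1) ** 2) + math.exp(-0.5 * (x - mu2) ** 2))
                for x in data]
        # M-step: re-estimate each mean as the responsibility-weighted average
        w1 = sum(resp)
        w2 = len(data) - w1
        mu1 = sum(r * x for r, x in zip(resp, data)) / w1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / w2
    return mu1, mu2

# Two clusters of points, around 0 and around 5, with deliberately poor guesses
data = [-0.2, 0.1, 0.0, 0.3, 4.8, 5.1, 5.0, 5.3]
mu1, mu2 = em_gmm_1d(data, -1.0, 6.0)
```

Each iteration improves the estimate, converging to a local optimum that depends on the initial guess, exactly as the lecture notes.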

We will shortly see that the iterative closest point algorithm

works in the same fashion.

What we learned in week 3 is as follows.

First we observe a 3D point cloud from 3D sensors.

This figure shows an example from a depth camera.

A 3D map is usually represented as a tree structure instead of as a full grid

as done in two dimensions.

This is done for efficiency:

the map can keep the full precision of the point locations.

Thanks to this spatial organization, we can speed up finding

the closest point to a given point in each map update.
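As a rough sketch of why a tree structure speeds up closest-point queries, here is a minimal k-d tree with nearest-neighbor search in plain Python. The structure and names are illustrative, not the course's implementation, and real 3D maps (e.g. octrees) differ in detail:

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree over 3D points.
    Each node stores one point and splits on one axis, cycling x, y, z."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Return the tree point closest to `query`, pruning any branch whose
    splitting plane is farther away than the best distance found so far."""
    if node is None:
        return best
    p = node["point"]
    if best is None or dist2(p, query) < dist2(best, query):
        best = p
    axis = node["axis"]
    diff = query[axis] - p[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # Only descend the far side if it could still contain a closer point
    if diff ** 2 < dist2(best, query):
        best = nearest(far, query, best)
    return best

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (3.0, 3.0, 3.0)]
tree = build_kdtree(cloud)
print(nearest(tree, (0.9, 0.1, 0.0)))  # → (1.0, 0.0, 0.0)
```

The pruning step is what turns the brute-force scan into an average-case logarithmic query, which matters when ICP must look up thousands of correspondences per iteration.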

Now we will look at this in detail. We have two sets of points,

one of them is a point cloud as a measurement and

the other is a point cloud of the map model.

Our goal is to put the newly measured points into the right place on

the map model.

In order to do so, we need to find a rotation and

translation that move the measured points to match the model points.

Additionally, we need to know which measurement points correspond to

which points in the model.

We can visually detect the corresponding parts in this example.

However, for a robot with tens of thousands of noisy points, it

is not obvious how to find the matching pattern.

Now let's look at how ICP handles these

two problems in an expectation maximization framework.

The ICP algorithm makes the optimistic

assumption that the point sets are already close enough.

In this way we have a good prior for the rotation R and translation t.

Under this assumption,

the correspondence of a point will be the closest point to that one.

In this way,

we find, for each measured point, the closest corresponding point in the map.

When a map has a tree structure,

this process is much faster than a brute force search.

Once we have our correspondences, we can refine our estimate of R and

t by solving this optimization.

If you are interested,

the cited paper gives details of the solution in order to obtain R and t.

It's good practice.
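For intuition, here is what the closed-form alignment looks like in 2D, where the rotation reduces to a single angle. This is a sketch under simplifying assumptions: the 2D reduction and the name `align_2d` are mine, and the cited paper's 3D solution uses an SVD instead.

```python
import math

def align_2d(P, Q):
    """Closed-form least-squares rotation and translation mapping points P
    onto corresponding points Q (2D analogue of the SVD solution in ICP).
    Assumes P[i] corresponds to Q[i]."""
    n = len(P)
    # Centroids of both point sets
    px = sum(p[0] for p in P) / n; py = sum(p[1] for p in P) / n
    qx = sum(q[0] for q in Q) / n; qy = sum(q[1] for q in Q) / n
    # Accumulate cross- and dot-products of the centered points
    s_cross = s_dot = 0.0
    for (x, y), (u, v) in zip(P, Q):
        x, y, u, v = x - px, y - py, u - qx, v - qy
        s_cross += x * v - y * u
        s_dot += x * u + y * v
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that carries the rotated centroid of P onto that of Q
    tx = qx - (c * px - s * py)
    ty = qy - (s * px + c * py)
    return theta, (tx, ty)

# P rotated by 90 degrees and shifted by (1, 2) gives Q
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
Q = [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)]
theta, t = align_2d(P, Q)
```

With exact correspondences and noise-free points this recovers the transform exactly; inside ICP it is applied repeatedly as the correspondences improve.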

After each iteration, we will have better and

better correspondences, in addition to better registrations.

We iterate this until it converges.

Once it does, we can obtain the rotation and

translation between the two sets of points.
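Putting the two steps together, a self-contained 2D ICP loop might look as follows. This is illustrative only: it uses a brute-force nearest-neighbor search rather than a tree, a 2D closed-form alignment in place of the 3D SVD solution, and the name `icp_2d` is my own.

```python
import math

def icp_2d(scan, model, iters=20):
    """Alternate nearest-neighbor correspondence (the E-step analogue)
    with a closed-form rigid alignment (the M-step analogue)."""
    theta_tot, t_tot = 0.0, (0.0, 0.0)
    for _ in range(iters):
        # Correspondence: pair each scan point with its closest model point
        pairs = [(p, min(model, key=lambda m: (m[0] - p[0]) ** 2 + (m[1] - p[1]) ** 2))
                 for p in scan]
        # Alignment: closed-form 2D rotation and translation for these pairs
        n = len(pairs)
        px = sum(p[0] for p, _ in pairs) / n; py = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n; qy = sum(q[1] for _, q in pairs) / n
        sc = sd = 0.0
        for (x, y), (u, v) in pairs:
            x, y, u, v = x - px, y - py, u - qx, v - qy
            sc += x * v - y * u
            sd += x * u + y * v
        th = math.atan2(sc, sd)
        c, s = math.cos(th), math.sin(th)
        tx = qx - (c * px - s * py)
        ty = qy - (s * px + c * py)
        # Apply the increment to the scan and compose it into the total pose
        scan = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in scan]
        theta_tot += th
        t_tot = (c * t_tot[0] - s * t_tot[1] + tx, s * t_tot[0] + c * t_tot[1] + ty)
    return theta_tot, t_tot, scan

# Demo: the scan is the model rotated by 0.1 rad and shifted by (0.2, 0.1)
model = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
c, s = math.cos(0.1), math.sin(0.1)
scan = [(c * x - s * y + 0.2, s * x + c * y + 0.1) for x, y in model]
th, t, aligned = icp_2d(scan, model)
```

Because the initial misalignment is small, the very first correspondences are already correct and the loop converges immediately; with a larger initial offset it would settle into whatever local minimum the starting guess allows, which is exactly why ICP needs a close initial alignment.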

Let me show you an example with two sequential point clouds obtained from

a depth camera while a quadrotor is flying in a room.

Since the robot was moving in the yaw direction,

the scene seen from the body frame appears rotated about the z axis.

Now, we want to register the two sets to become a single, consistent point cloud.

This short animation shows how the ICP algorithm

converges to a local minimum after a few iterations.

Through registration, we are computing the motion increment of the robot.

As we did in scan matching in the previous lecture, this gives a reference

location for the robot with respect to points that are known.

Although we need a pretty close initial alignment, this algorithm

is widely used in many robotic applications,

including three dimensional simultaneous localization and mapping.

We have seen how the ICP algorithm could be used for localization with 3D sensors.

There are many variants of this algorithm, with

different optimization formulations, different ways to choose correspondences, and

different ways to reject bad points.

Overall, this has been a good overview of many different localization techniques.