How can robots determine their state in real time and extract information about their surrounding environment from noisy sensor measurements? In this module, you will learn how robots incorporate uncertainty into their estimates and learn from a dynamic and changing world. Special topics include probabilistic generative models and Bayesian filtering for localization and mapping.


A course from University of Pennsylvania

Robotics: Estimation and Learning

315 ratings



From this lesson

Mapping

We will learn about robotic mapping. Specifically, our goal of this week is to understand a mapping algorithm called Occupancy Grid Mapping based on range measurements. Later in the week, we introduce 3D mapping as well.

- Daniel Lee, Professor of Electrical and Systems Engineering

School of Engineering and Applied Science

In this lecture, I will continue talking about occupancy grid mapping.

We will see how to incorporate range sensors.

In the previous lecture,

we had this example of two sequential updates of the map.

In this example, we readily localized the robot on the global map

assuming the pose of the robot is known.

We are going to keep the assumption in order to focus on the mapping problem.

However, we still need to transform the sensor data

measured in the body frame into the global map frame.

If you're familiar with coordinate transformations between 2 different frames

and dealing with 2D arrays, then you may skip this lecture.

Let me show what coordinate frames we have.

This is the map and the coordinate frame is attached to it.

The origin does not need to be the top left corner.

This follows the MATLAB convention for representing an image array.

This is our mobile robot.

We may assign a body-fixed coordinate frame as shown.

In this example,

the ray from the range sensor is emitted along the first axis of the body frame.

Let's imagine that we received a distance measurement d.

Then, how can we tell which cell is hit by the ray, and

which cells are observed to be free?

To answer the questions, we will first consider a continuous version of the map.

And we turn to the discretized map later.

If we know where the robot is located on the map, and

which direction it is heading, then we can place the robot on the map like this.

To find the coordinates of the obstructing point,

we use the distance measurement and the known pose of the robot.

Up to this point, we use the continuous representation for the position.

However, the map we are going to build should be represented as

discretized cells of a certain resolution.

Let's think about how to express a location on the grid.

Consider a 1D case and take the example where the grid resolution is 10 centimeters.

Any point between 0 and

10 on the continuous domain will be mapped to the index 1.

In this example, we follow the MATLAB convention where an index starts from 1.

A point between 10 and 20 centimeters will be mapped to the index 2.

We can continue to obtain the index of any point within the given line segment.

In another example, the grid resolution can be 7 centimeters.

In this case, any point between 0 and 7 centimeters on

the continuous domain, will be mapped to the index 1.

A point between 7 and 14 will be mapped to the index 2.

For a given r, the index of a point x is

computed as the ceiling integer of x over r.

To generalize, if a line segment starts from some value other than 0,

then we just need to subtract that minimum value from x when we compute the index.
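The index rule above can be sketched in Python. This is an illustrative sketch only: `grid_index` is a hypothetical helper name, and it follows the lecture's MATLAB-style 1-based indexing.

```python
import math

def grid_index(x, r, x_min=0.0):
    """Map a continuous 1D coordinate x (same units as r) to a 1-based
    grid index for resolution r, optionally offset by the segment's
    minimum value x_min. Points exactly on a cell boundary fall into
    the lower cell; adapt the edge handling to your own convention."""
    return math.ceil((x - x_min) / r)

# Resolution 10 cm: points in (0, 10] map to index 1, (10, 20] to index 2.
print(grid_index(3.0, 10.0))    # index 1
print(grid_index(12.5, 10.0))   # index 2
# Resolution 7 cm: points in (0, 7] map to index 1, (7, 14] to index 2.
print(grid_index(6.9, 7.0))     # index 1
print(grid_index(8.0, 7.0))     # index 2
```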

Getting back to our 2D map, computing the grid indices does

not get more complicated than in the 1D case.

Each component, X1 and X2, can be treated independently for

the computation of the index pair, I1 and I2.

Let me summarize how to obtain the 2D grid indices of the occupied cell.

What is given is the distance measurement d and the pose of the robot.

From that, we can first compute the location of

the obstructing point on the continuous domain.

Then, we compute the indices of the point on the discretized map of resolution R.
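Putting the two steps together, here is a minimal Python sketch of the summary above, assuming the ray is emitted along the first body axis as in the lecture's example; the function name and pose parameterization (px, py, theta) are illustrative, not the assignment's required interface.

```python
import math

def occupied_cell(px, py, theta, d, r, x_min=0.0, y_min=0.0):
    """Given a known robot pose (px, py, heading theta), a range
    measurement d along the first body axis, and grid resolution r,
    return the 1-based indices (i1, i2) of the cell hit by the ray."""
    # Step 1: transform the obstructing point from the body frame
    # to the continuous global map frame.
    x1 = px + d * math.cos(theta)
    x2 = py + d * math.sin(theta)
    # Step 2: discretize each component independently, as in the 1D case.
    i1 = math.ceil((x1 - x_min) / r)
    i2 = math.ceil((x2 - y_min) / r)
    return i1, i2

# Robot at (0.5, 0.5) heading along the x-axis, measurement d = 2.0,
# resolution 0.5: the obstructing point (2.5, 0.5) falls in cell (5, 1).
print(occupied_cell(0.5, 0.5, 0.0, 2.0, 0.5))
```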

Next, we will need to calculate the indices of the free cells along the ray.

For this, we are going to use Bresenham's line algorithm.

We will not talk about the algorithm itself here.

An implementation of this algorithm will be provided as a helper function.

Basically the algorithm takes two points as the input argument and returns a list of

cells that form an approximation of the line segment between the two points.
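For illustration, here is a minimal integer-only sketch of Bresenham's line algorithm with the same input/output shape just described. It is not the course's helper function, which will be provided with the assignment.

```python
def bresenham(i1, j1, i2, j2):
    """Return the list of grid cells approximating the line segment
    from cell (i1, j1) to cell (i2, j2), endpoints included."""
    cells = []
    di, dj = abs(i2 - i1), -abs(j2 - j1)
    si = 1 if i1 < i2 else -1      # step direction along each axis
    sj = 1 if j1 < j2 else -1
    err = di + dj                   # accumulated error term
    i, j = i1, j1
    while True:
        cells.append((i, j))
        if (i, j) == (i2, j2):
            break
        e2 = 2 * err
        if e2 >= dj:                # step along the first axis
            err += dj
            i += si
        if e2 <= di:                # step along the second axis
            err += di
            j += sj
    return cells

print(bresenham(1, 1, 4, 3))
```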

So far we have treated a sensor as a single ray.

But in more realistic settings,

it has multiple rays emitted in different directions.

Let's look at those cases.

Let's say the sensor emits five rays at the indicated heading angles, alphas,

with respect to the body frame.

Each ray gives the distance as shown.

If the pose of the robot is known, as before,

the position of the cells hit by the rays can be computed in the same way.

But in this case, the direction of each ray is taken into the computation.

You can use Bresenham's algorithm for individual rays to find the free cells,

and take the union of them.

We have seen examples to learn how to interpret range sensor readings.

Now, I suppose you are ready to start the programming assignment for this week.

I will also continue to introduce how 3D mapping differs from 2D

mapping in the next lecture.