3.3. Sensor preprocessing
The “early” stage of perception requires data preprocessing and fusion. The most common form of fusion arises in vehicle pose estimation, where “pose” comprises the vehicle coordinates, orientation (yaw, roll, and pitch), and velocity. Pose estimation is achieved via Kalman filters that integrate GPS measurements, wheel odometry, and inertial measurements.13
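The filter details vary between vehicles; as a rough, hedged illustration, the sketch below shows a 1-D constant-velocity Kalman filter that fuses GPS position fixes with wheel-odometry velocity. The state layout, noise magnitudes, and update rates are illustrative assumptions, not Stanley’s or Junior’s actual parameters (the real filters estimate the full 3-D pose and also integrate the inertial measurements).

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                       # filter period (s), assumed

# State x = [position, velocity]; constant-velocity motion model.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
Q = np.diag([1e-3, 1e-2])      # process noise (illustrative)

x = np.zeros(2)                # state estimate
P = np.eye(2) * 10.0           # initial uncertainty

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

H_gps  = np.array([[1.0, 0.0]]); R_gps  = np.array([[2.0]])    # GPS: position, ~1.4 m std
H_odom = np.array([[0.0, 1.0]]); R_odom = np.array([[0.05]])   # odometry: velocity

v_true = 5.0                   # simulated true speed (m/s)
for t in range(1, 101):
    x, P = F @ x, F @ P @ F.T + Q                    # predict step
    z_v = v_true + rng.normal(0, 0.2)                # noisy wheel speed
    x, P = kf_update(x, P, np.array([z_v]), H_odom, R_odom)
    if t % 10 == 0:                                  # GPS fix at 1 Hz
        z_p = v_true * t * dt + rng.normal(0, 1.4)
        x, P = kf_update(x, P, np.array([z_p]), H_gps, R_gps)

print(f"estimated position {x[0]:.2f} m, velocity {x[1]:.2f} m/s")
```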
Further preprocessing takes place for the environment sensor data (laser, radar, and camera images). Stanley integrates laser data over time into a 3-D point cloud, as illustrated in Figure 3. The point cloud is then analyzed for vertical obstacles, resulting in 2-D maps, also shown in Figure 3. Because of the noise in sensor measurements, the actual test for the presence of a vertical obstacle is probabilistic.14 This test computes the probability that an obstacle is present, taking potential pose measurement errors into account. When this probability exceeds a threshold, the corresponding map cell is marked “occupied.”
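The published test is more involved; the sketch below conveys the core idea under simplifying assumptions: two nearby laser points whose true height difference exceeds a critical value indicate a vertical obstacle, and pose error is modeled as Gaussian noise on the measured height difference. The threshold and noise values are illustrative.

```python
import math

def obstacle_probability(dz_measured, sigma_pose, delta=0.15):
    """Probability that the true height difference between two nearby
    laser points exceeds delta (m), given the measured difference and
    a Gaussian model of pose-induced height error (std sigma_pose).
    All numeric values are illustrative assumptions."""
    def cdf(v):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))
    # P(|dz_true| > delta) under dz_true ~ N(dz_measured, sigma_pose^2)
    p_above = 1.0 - cdf((delta - dz_measured) / sigma_pose)
    p_below = cdf((-delta - dz_measured) / sigma_pose)
    return p_above + p_below

# Mark a map cell occupied only when the obstacle probability is high.
dz = 0.25            # measured height difference (m) between two points
if obstacle_probability(dz, sigma_pose=0.05) > 0.9:
    print("cell marked occupied")
```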
A similar analysis takes place in Junior. Figure 4 illustrates
a scan analysis, where adjacent scan lines are analyzed for
obstacles as small as curbs.
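On flat ground, consecutive rings of a multi-beam laser fall at predictable ranges, and a curb compresses the spacing between adjacent rings. A minimal sketch of this idea follows; the sensor height, beam angles, and decision threshold are assumptions, not Junior’s actual values.

```python
import math

def flat_ground_range(h, theta):
    """Range at which a beam with downward elevation angle theta (rad)
    hits flat ground from sensor height h (m)."""
    return h / math.tan(theta)

def is_curb(r_near, r_far, h=2.0,
            theta_near=math.radians(8.0), theta_far=math.radians(7.5),
            ratio=0.6):
    """Flag a curb-like step when the measured spacing between two
    adjacent scan rings is much smaller than the flat-ground prediction.
    Sensor height, beam angles, and the ratio are illustrative."""
    expected = flat_ground_range(h, theta_far) - flat_ground_range(h, theta_near)
    measured = r_far - r_near
    return measured < ratio * expected

# Rings measured at 14.2 m and 14.6 m; flat ground predicts ~1 m spacing.
print(is_curb(14.2, 14.6))   # True -> compressed spacing suggests a curb
```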
Perhaps one of the most innovative elements of autonomous driving pertains to the fusion of multiple sensors. Stanley, in particular, is equipped with laser sensors whose range only extends to approximately 26 m. At race speeds, this range is too short to detect obstacles in time to avoid them.
Adaptive vision addresses this problem.3 Figure 5 depicts camera images segmented into drivable and undrivable terrain. This segmentation relies on the laser data: the adaptive vision software extracts a small drivable area right in front of the robot, using the laser obstacle map. This area is then used to train the computer vision system to recognize similar color and texture distributions anywhere in the image. The adaptation is performed ten times a second, so that the robot continuously adapts to the present terrain conditions. Adaptive vision extends the obstacle detection range by up to 200 m, and it was essential to Stanley’s ability to travel safely at speeds of up to almost 40 mph.

figure 3. Stanley integrates data from multiple lasers over time; the resulting point cloud is analyzed for vertical obstacles, which are then registered in 2-D obstacle maps.

figure 4. Junior analyzes 3-D scans acquired through a laser range finder with 64 scan lines. Shown here is a single laser scan, along with the corresponding camera view of the vehicle.

figure 5. Camera image analysis is based on adaptive vision, which leverages short-range laser data to train the system to recognize similar-looking terrain at greater distance.
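The published adaptive-vision system maintains richer models (mixtures of Gaussians in color space); the following is a minimal sketch of the underlying idea under simpler assumptions: fit a single Gaussian color model to the laser-verified drivable patch, then label every pixel by its Mahalanobis distance to that model. The threshold and patch location are illustrative.

```python
import numpy as np

def train_color_model(patch):
    """Fit a single Gaussian in RGB space to the pixels of the
    laser-verified drivable patch (H x W x 3 array)."""
    pixels = patch.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels.T) + 1e-6 * np.eye(3)   # regularize
    return mean, np.linalg.inv(cov)

def classify_drivable(image, mean, cov_inv, thresh=9.0):
    """Label pixels drivable when their squared Mahalanobis distance
    to the trained color model is below thresh (assumed value)."""
    diff = image.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return (d2 < thresh).reshape(image.shape[:2])

# Re-train roughly 10x per second from the laser-confirmed region.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(120, 160, 3))   # stand-in camera frame
patch = image[100:120, 70:90]                      # laser-verified region
mean, cov_inv = train_color_model(patch)
mask = classify_drivable(image, mean, cov_inv)
print(f"{mask.mean():.0%} of pixels labeled drivable")
```

Re-fitting the model on every new frame is what lets the classifier track changing terrain appearance as the vehicle drives.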
3.4. Localization
In both challenges, DARPA supplied contestants with maps
of the environment. Figure 6 shows the Urban Challenge
map. The maps contained detailed information about the
drivable road area, plus data on speed limits and intersection handling.
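For the Urban Challenge, the supplied map took the form of DARPA’s Route Network Definition File (RNDF). As a hedged illustration of the kind of information such a map carries, a robot-side representation might look like the following sketch; all type and field names are hypothetical, not DARPA’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    """One drivable road segment from the supplied map.
    Field names are hypothetical, not the actual RNDF schema."""
    segment_id: int
    waypoints: list[tuple[float, float]]       # (lat, lon) polyline
    speed_limit_mps: float                     # posted limit, m/s
    lane_width_m: float = 3.5

@dataclass
class Intersection:
    intersection_id: int
    entry_segments: list[int] = field(default_factory=list)
    has_stop_sign: bool = False                # governs precedence handling

# A one-segment map with one stop-controlled intersection.
road_map = {
    "segments": [RoadSegment(1, [(37.43, -122.17), (37.44, -122.17)], 13.4)],
    "intersections": [Intersection(1, entry_segments=[1], has_stop_sign=True)],
}
```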
Localization addresses the problem of establishing the correspondence between the robot’s present location and the map. At first glance, the INS may appear sufficient to do so; however, typical INS estimation errors can be a meter or more, which exceeds the acceptable error in most cases. Consequently, both robots relate features