only the first, LiDAR obstacle perception (or LOP), which takes as input the three-dimensional point-cloud data generated by Velodyne's HDL-64E.

LOP resolves the raw point-cloud data using the following pipeline, as excerpted from the Apollo website:

HDMap region of interest filter (tested in our experiments). The region of interest (ROI) specifies the drivable area, including road surfaces and junctions, retrieved from a high-resolution (HD) map. The HDMap ROI filter removes LiDAR points that fall outside the ROI, discarding background objects (such as buildings and trees along the road). What remains is the point cloud inside the ROI for subsequent processing;

Convolutional neural network segmentation (tested in our experiments). After the HDMap ROI filter has identified the surrounding environment, the Apollo software obtains the filtered point cloud that includes only the points inside the ROI: the drivable road and junction areas. With most background obstacles (such as buildings and trees along the road) removed, the point cloud inside the ROI is fed into the “segmentation” module, which detects and segments out foreground obstacles (such as cars, trucks, bicycles, and pedestrians). Apollo uses a deep convolutional neural network (CNN) for accurate obstacle detection and segmentation. The output of this process is a set of objects corresponding to obstacles in the ROI;

MinBox builder (tested in our experiments). This object-builder component establishes a bounding box for each detected obstacle;

HM object tracker (not tested in our experiments). This tracker is designed to track obstacles detected in the segmentation step; and

Sequential type fusion (not tested in our experiments). To smooth the obstacle type and reduce type switching over the entire trajectory, Apollo uses a sequential type fusion algorithm.

Our software-testing experiments involved the first, second, and third features but not the fourth and fifth, because the first three are the most critical and fundamental.

Our testing method: MT in combination with fuzzing. Based on the Baidu specification of the HDMap ROI filter, we identified the following meta-
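The fuzzing-based metamorphic test suggested by this setup can be sketched as follows. The rectangular ROI, the `detect_obstacles` stand-in, and every name below are illustrative assumptions for the sketch, not Apollo's actual data structures or API.

```python
import random

# Hypothetical ROI as an axis-aligned rectangle (xmin, ymin, xmax, ymax);
# Apollo's real ROI is a polygonal drivable area retrieved from the HD map.
ROI = (0.0, 0.0, 50.0, 50.0)

def in_roi(point, roi=ROI):
    x, y = point
    return roi[0] <= x <= roi[2] and roi[1] <= y <= roi[3]

def random_points_outside_roi(n, roi=ROI, lo=-100.0, hi=150.0, seed=0):
    """Fuzzing step: rejection-sample n random points strictly outside the ROI."""
    rng = random.Random(seed)
    points = []
    while len(points) < n:
        p = (rng.uniform(lo, hi), rng.uniform(lo, hi))
        if not in_roi(p, roi):
            points.append(p)
    return points

def detect_obstacles(cloud):
    """Idealized stand-in for the LOP pipeline: report the points inside the
    ROI. In the real experiments this is Apollo's perception output."""
    return sorted(p for p in cloud if in_roi(p))

def relation_holds(source_cloud, n_extra):
    """Metamorphic relation: adding points outside the ROI must not change
    the obstacles detected inside the ROI."""
    follow_up = source_cloud + random_points_outside_roi(n_extra)
    return detect_obstacles(source_cloud) == detect_obstacles(follow_up)
```

With this idealized stand-in detector the relation holds by construction; the experiments summarized in Figure 4 show that Apollo's real perception module can violate it.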
Figure 4. MT detected real-life fatal errors in LiDAR point-cloud data interpretation in the Apollo “perception” module: three missing cars and one missing pedestrian. (a) Original: 101,676 LiDAR data points; the green boxes were generated by the Apollo system to represent the detected cars. (b) After adding 1,000 random data points outside the ROI, the three cars inside the ROI could no longer be detected. (c) Original: 104,251 LiDAR data points; the small pink mark was generated by the Apollo system to represent a detected pedestrian. (d) After adding only 10 random data points outside the ROI, the pedestrian inside the ROI could no longer be detected.
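The MinBox builder step in the pipeline can be illustrated with a brute-force minimum-area oriented bounding-box search. This is a simplified sketch under assumed 2D point input, not Apollo's implementation, which operates on the polygon of each segmented obstacle.

```python
import math

def min_area_box(points, angle_steps=180):
    """Illustrative MinBox-style builder: sweep candidate heading angles and
    keep the oriented bounding box with the smallest area. Returns
    (area, heading, (umin, vmin, umax, vmax)) in the rotated frame."""
    best = None
    for i in range(angle_steps):
        theta = math.pi * i / angle_steps
        c, s = math.cos(theta), math.sin(theta)
        # Coordinates of each point in a frame rotated by theta.
        us = [x * c + y * s for x, y in points]
        vs = [y * c - x * s for x, y in points]
        area = (max(us) - min(us)) * (max(vs) - min(vs))
        if best is None or area < best[0]:
            best = (area, theta, (min(us), min(vs), max(us), max(vs)))
    return best
```

The sweep is quadratic in effort but easy to verify; an exact minimum-area box can instead be found with rotating calipers over the convex hull, since the optimal box is aligned with one hull edge.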