We identified the following metamorphic relation, whereby the software under test is the LiDAR obstacle perception (LOP) subsystem of Apollo, A and A′ represent two inputs to LOP, and O and O′ represent LOP's outputs for A and A′, respectively.

MR1. Let A and A′ be two frames of three-dimensional point cloud data that are identical except that A′ includes a small number of additional LiDAR data points randomly scattered in regions outside the ROI. Also let O and O′ be the sets of obstacles identified by LOP for A and A′, respectively (LOP identifies only obstacles within the ROI). The following relation must then hold: O ⊆ O′.
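MR1 can be checked mechanically once the two outputs are comparable. The sketch below is a minimal harness, assuming a hypothetical run_lop wrapper that invokes the LOP subsystem on a frame and returns a set of comparable obstacle identifiers; neither the wrapper nor such identifiers are part of Apollo's actual interface, and matching obstacles across runs is itself nontrivial (the experiments described below sidestep it by comparing counts).

```python
def check_mr1(run_lop, frame_a, frame_a_prime):
    """Check MR1: every obstacle detected in source frame A must still
    be detected in follow-up frame A'; that is, O must be a subset of O'."""
    o = run_lop(frame_a)               # O: obstacles detected in A
    o_prime = run_lop(frame_a_prime)   # O': obstacles detected in A'
    missing = o - o_prime              # obstacles lost after adding noise
    return len(missing) == 0, missing  # (passed?, evidence of any violation)
```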
In MR1, the additional LiDAR data points in A′ could represent small particles in the air or just some noise from the sensor, whose existence is possible.23 MR1 says the existence of some particles, or some noise points, or their combination, in the air far from the ROI should not cause an obstacle on the roadway to become undetectable. As an extreme example, a small insect 100 meters away (outside the ROI) should not interfere with the detection of a pedestrian in front of the vehicle. This requirement is intuitively valid and agrees with the Baidu specification of its HDMap ROI filter. According to the user manual for the HDL64E LiDAR sensor, it can be mounted atop the vehicle, delivering a 360° horizontal field of view and a 26.8° vertical field of view, capturing a point cloud with a range of up to 120 meters.
We next describe the design of three series of experiments to test the LOP using MR1. The Apollo Data Open Platform (http://data.apollo.auto) provides a set of “vehicle system demo data”: sensor data collected at real scenes. We downloaded the main file of this dataset, named demo-sensor-demo-apollo-1.5.bag (8.93GB). This file included point cloud data collected by Baidu engineers using the Velodyne LiDAR sensor on the morning of September 6, 2017. In each series of experiments, we first randomly extracted 1,000 frames of the point cloud data; we call each such frame a “source test case.”
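Extracting the source test cases is straightforward with the standard ROS rosbag API, since Apollo 1.5 recorded its demo data as ROS bags. The sketch below is one way to do it; the topic name is our assumption about where the Velodyne point clouds live in this bag, not something the dataset documentation here specifies.

```python
import random
import rosbag
import sensor_msgs.point_cloud2 as pc2

BAG_FILE = "demo-sensor-demo-apollo-1.5.bag"
TOPIC = "/apollo/sensor/velodyne64/compensator/PointCloud2"  # assumed topic

# Read every point cloud frame as a list of (x, y, z, intensity) tuples.
frames = []
bag = rosbag.Bag(BAG_FILE)
for _, msg, _ in bag.read_messages(topics=[TOPIC]):
    frames.append(list(pc2.read_points(
        msg, field_names=("x", "y", "z", "intensity"))))
bag.close()

# Randomly draw 1,000 frames to serve as source test cases.
source_test_cases = random.sample(frames, 1000)
```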
For each source test case t, we ran the LOP software to identify its ROI and generate O, the set of detected obstacles for t. We then constructed a follow-up test case t′ by randomly scattering n additional points into the three-dimensional space outside the ROI of t; we determined the value of the z coordinate of each point by choosing a random value between the minimum and maximum z-coordinate values of all points in t. Using a similar approach, we also generated a d value, the reflected intensity of the laser, for each added point. We then ran the LOP software for t′, producing O′, the set of detected obstacles. Finally, we compared O and O′. We conducted three series of experiments: for n = 10, 100, and 1,000. We thus ran the LOP software a total of (1,000 + 1,000) × 3 = 6,000 times, processing 3,000 source test cases and 3,000 follow-up test cases.
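The construction of the follow-up test cases can be sketched as follows. We assume each frame is an (N, 4) NumPy array of (x, y, z, d) rows and that the caller supplies an outside_roi predicate; both the array layout and the predicate are illustrative, since the real frame format and the ROI test live inside Apollo. Rejection sampling keeps the sketch independent of the ROI's shape: candidates are drawn over the sensor's full range and kept only when they fall outside the region of interest.

```python
import numpy as np

def make_followup(frame, n, outside_roi, rng=None):
    """Build follow-up test case t' from source test case t by scattering
    n extra points outside the ROI.  frame: (N, 4) array of (x, y, z, d)."""
    rng = rng or np.random.default_rng()
    z_lo, z_hi = frame[:, 2].min(), frame[:, 2].max()  # z range of t
    d_lo, d_hi = frame[:, 3].min(), frame[:, 3].max()  # intensity range of t
    extra = []
    while len(extra) < n:
        # Draw a candidate (x, y) within the sensor's 120 m range and
        # keep it only if it falls outside the region of interest.
        x, y = rng.uniform(-120.0, 120.0, size=2)
        if outside_roi(x, y):
            z = rng.uniform(z_lo, z_hi)  # random z within t's z range
            d = rng.uniform(d_lo, d_hi)  # random reflected intensity
            extra.append((x, y, z, d))
    return np.vstack([frame, np.asarray(extra)])
```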
Test results. In our experiments, for ease of implementation of MR1, we did not check the subset relation O ⊆ O′ but instead compared the numbers of objects contained in O and O′, denoted by |O| and |O′|, respectively. Note that O ⊆ O′ → |O| ≤ |O′|; hence, the condition we actually checked was less strict than MR1. That is, if |O| > |O′|, then there must be something wrong, as one or more objects in O must be missing from O′.

The results of our experiments were quite surprising; the table here summarizes the overall results. The violation rates (that is, cases with |O| > |O′| out of 1,000 pairs of outputs) were 2.7% (= 27 ÷ 1,000), 12.1% (= 121 ÷ 1,000), and 33.5% (= 335 ÷ 1,000) for n = 10, 100, and 1,000, respectively. This means as few as 10 sheer random points scattered in the vast three-dimensional space outside the ROI could cause the driverless car to fail to detect an obstacle on the roadway, with 2.7% probability. When the number of random points increased to 1,000, the probability became as high as 33.5%. According to the HDL64E user manual, the LiDAR sensor generates more than one million data points per second, and each frame of point cloud data used in our experiments normally contained more than 100,000 data points. The random points we added to the point cloud frames were thus negligible in number.
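The relaxed check and the violation-rate bookkeeping amount to a few lines. The sketch below assumes the per-pair obstacle sets have already been collected; it restates the counting described above rather than reproducing the actual harness.

```python
def violation_rate(output_pairs):
    """output_pairs: list of (O, O_prime) obstacle sets, one per test pair.
    A pair violates the relaxed form of MR1 when |O| > |O'|."""
    violations = sum(1 for o, o_prime in output_pairs if len(o) > len(o_prime))
    return violations / len(output_pairs)

# Example: 27 violating pairs out of 1,000 gives a rate of 0.027, or 2.7%.
```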
The LOP software in our experiments categorized the detected obstacles into four types: car, pedestrian, cyclist, and unknown, as “depicted by bounding boxes in green, pink, blue and purple respectively.”