Does Tesla need LiDAR?

It doesn't work without LiDAR!

Autonomous driving based on a sensor network

The most important premise in road traffic is that road users do not collide: safety is the top priority. To enable safe movement through the complex 3D world, sufficient distance must always be maintained from surrounding objects and vehicles. In manually controlled vehicles, the driver is responsible for keeping these distances and avoiding collisions with other road users. At the higher automation levels, the vehicle itself is responsible for perceiving its surroundings. Environment perception is therefore a critical and essential part of automated driving. For this purpose, sensor networks consisting of ultrasonic, camera, radar and LiDAR sensors are integrated into the vehicles.

But why are so many different sensor systems needed for automated driving?

Is LiDAR a "crutch"?

If Tesla boss Elon Musk has his way, at least one of these sensor technologies is not needed for autonomous driving. At an investor conference in 2019, he explained that LiDAR sensors are the wrong path and that cameras with capable algorithms, in combination with radar, are sufficient for automated driving functions. His argument that LiDAR sensors are too expensive and too large to be integrated into current production vehicles may have been correct so far - this is exactly why the Blickfeld technology was developed - but relying only on cameras and radar is not a safe approach today.

A recent incident on a highway in Taiwan shows why: a truck overturned, blocking the highway lanes, with the top of its white tarpaulin facing the following traffic. A Tesla crashed into the truck without braking. Fortunately, the truck carried no cargo, so no one was injured. How did this accident happen? Since the vehicle did not slow down as it approached the obstacle, it can be assumed that the so-called Autopilot was switched on; a human driver would probably have reacted at least shortly before the impact. Tesla's Autopilot relies on a sensor suite without LiDAR, building instead on cameras supported by radar and ultrasonic sensors. The image-recognition software that evaluates the recorded camera data - and thus provides the basis for driving decisions - could not make sense of the unfamiliar situation of the overturned truck and did not even detect an object in the vehicle's own lane: the camera system misinterpreted the tarpaulin and did not read the white surface as an obstacle.

Cameras - the eyes of the cars?

Cameras are similar to our human eyes - they capture images as we see them, i.e. in color. What camera recordings lack, however, is the third dimension needed to measure distances. This ability is essential when it comes to avoiding objects. The human brain interprets the recorded 2D information to estimate distances; cameras need image-recognition software for this.

The problem with image recognition: in order to interpret images, the algorithms have to be trained by labeling and storing experienced situations. This is achieved with the help of artificial intelligence, machine learning and thousands of test kilometers - real and simulated. But what happens when the vehicle encounters an unknown situation? Covering this so-called “long tail”, i.e. handling all those situations that are not part of everyday driving and can be described as exceptional, is a challenge that has not yet been solved. As long as it remains unsolved, a camera cannot serve as the sole sensor system on which automated driving functions are based. The necessary interpretation of the camera data by algorithms creates room for errors, and errors endanger the safety of road users.

LiDAR: no room for interpretation

Sensor technologies such as LiDAR leave no room for interpretation on the question of whether an object is on the road: they emit laser beams that bounce off surrounding objects and are picked up again by the sensor. They record 3D data from the start and thus skip the intermediate step of converting 2D to 3D. If there is an obstacle on the road in front of the vehicle, LiDAR sensors detect it early and reliably, along with its exact dimensions and, above all, its distance to the vehicle.
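The principle behind this direct distance measurement can be sketched in a few lines. The following is a simplified illustration of time-of-flight ranging, not any vendor's actual implementation: the sensor measures the round-trip time of a laser pulse, and the distance follows directly from the speed of light, with no 2D-to-3D interpretation step in between.

```python
# Illustrative sketch of LiDAR time-of-flight ranging (not a real
# sensor API): distance is derived directly from the round-trip time
# of the emitted laser pulse.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting object in meters.

    The pulse travels to the object and back, hence the division by 2.
    """
    return SPEED_OF_LIGHT_M_S * t_seconds / 2.0

# A pulse returning after ~0.2 microseconds corresponds to ~30 m.
print(round(distance_from_round_trip(0.2e-6), 1))  # 30.0
```

Because the measurement is purely geometric, an obstacle produces a valid return regardless of its color or texture - which is exactly what a white tarpaulin would not guarantee for a camera.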

Classifying objects

Now, however, it can also be decisive what kind of object is in the vehicle's lane, because not every object is an obstacle that should trigger braking. The various sensor technologies classify objects in different ways: LiDAR sensors, for example, identify point clusters in the sensor data. Based on the size of these clusters, objects can be divided into categories such as cars, motorcycles or pedestrians. To identify, say, a windblown plastic bag as such - and therefore harmless - the evaluation of the camera data is again helpful, which, as described above, makes use of image-recognition software. Cameras are also needed to recognize street signs, for example, since LiDAR sensors do not record colors.
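The size-based categorization described above can be sketched as a simple rule set. The thresholds and class names below are illustrative assumptions for a cluster's bounding box, not values from any production system:

```python
# Hypothetical sketch of size-based classification of LiDAR point
# clusters. Thresholds are assumed, rough real-world dimensions.

def classify_cluster(length_m: float, width_m: float, height_m: float) -> str:
    """Assign a coarse category based on a cluster's bounding-box size."""
    if height_m > 1.0 and length_m < 1.2 and width_m < 1.2:
        return "pedestrian"   # tall and narrow
    if length_m < 2.5 and width_m < 1.2:
        return "motorcycle"   # short and narrow
    if length_m < 6.0:
        return "car"
    return "truck"

print(classify_cluster(4.5, 1.8, 1.5))  # car
print(classify_cluster(0.6, 0.6, 1.7))  # pedestrian
```

A real pipeline would first group raw points into clusters and track them over time; the sketch only shows why geometry alone yields a coarse category, while distinguishing, say, a plastic bag from a rock still needs the camera.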

LiDAR sensors for more safety

Every sensor technology thus has its advantages, its disadvantages and its raison d'être. It is clear that the redundancies in a sensor network are necessary to ensure the safety of vehicles with automated driving functions; none of the sensor technologies will enable autonomous driving on its own. Incidents such as the accident in Taiwan described above also clearly show that LiDAR sensors cannot be dispensed with in these sensor networks, because automated vehicles must first and foremost be one thing: safe. With LiDAR sensors, autonomous vehicles take a big step closer to this goal.