haltakov.eth 🧱🔨

@haltakov

Apr 27, 2021
How Does a Self-Driving Car Work? 🔧 🧠 🚙
This is the classical self-driving car software stack. Nowadays, every step in the pipeline is dominated by machine learning.
Read below for details on each step 👇
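As a rough illustration (not any vendor's actual code), here is a minimal sketch of how the stages chain together, each module consuming the previous one's output once per cycle. All class and method names are hypothetical:

```python
# Minimal sketch of the classical self-driving pipeline (hypothetical names).
class SelfDrivingPipeline:
    def __init__(self, perception, localization, prediction, planning, control):
        self.perception = perception
        self.localization = localization
        self.prediction = prediction
        self.planning = planning
        self.control = control

    def step(self, sensor_data, hd_map):
        # 1. Detect objects, lanes and drivable space from the raw sensor data
        scene = self.perception.process(sensor_data)
        # 2. Find the car's position in the HD map
        pose = self.localization.locate(sensor_data, hd_map)
        # 3. Predict what the other traffic participants will do
        forecasts = self.prediction.predict(scene, hd_map, pose)
        # 4. Plan the ego trajectory around them
        trajectory = self.planning.plan(scene, forecasts, hd_map, pose)
        # 5. Turn the trajectory into throttle, brake and steering commands
        return self.control.follow(trajectory, pose)
```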
Sensors 🎥
There are 3 main sensors for environment perception 360° around the car:
▪️ Cameras
▪️ Lidars
▪️ Radars
Each sensor has different advantages and disadvantages, so combining all 3 is the best strategy to achieve maximum robustness.
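To make the complementary strengths concrete, here is a tiny sketch of a sensor-agnostic detection format that camera, lidar and radar outputs could be mapped into; the field names are my own illustration, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """One detected object in a sensor-agnostic format (illustrative fields)."""
    sensor: str                            # "camera", "lidar" or "radar"
    position_m: tuple                      # (x, y) in the car's frame, meters
    velocity_mps: Optional[float] = None   # radar measures this directly
    obj_class: Optional[str] = None        # cameras are best at classification
    confidence: float = 1.0

# Example: the same pedestrian seen by all three sensors
camera_det = Detection("camera", (12.1, 1.9), obj_class="pedestrian", confidence=0.9)
lidar_det  = Detection("lidar",  (12.3, 2.0))                    # precise range, no class
radar_det  = Detection("radar",  (12.0, 2.2), velocity_mps=1.4)  # direct speed measurement
```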
Perception 🖼️
The perception module processes the raw sensor data to detect objects, drivable space and lane boundaries, measure distances etc.
Fusing the information from different sensors usually increases the quality of the data significantly.
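As an illustration of late fusion, here is a minimal sketch that merges detections from different sensors by associating those that lie close together; the distance threshold and the merging rule are arbitrary choices for the example, not a production algorithm:

```python
import math

def fuse_detections(detections, max_dist_m=1.0):
    """Greedy late fusion: merge detections (dicts with "sensor" and "pos")
    that lie within max_dist_m of each other by averaging their positions.
    Illustrative only; real association uses tracking and uncertainty."""
    fused, used = [], set()
    for i, a in enumerate(detections):
        if i in used:
            continue
        group = [a]
        for j in range(i + 1, len(detections)):
            if j in used:
                continue
            if math.dist(a["pos"], detections[j]["pos"]) <= max_dist_m:
                group.append(detections[j])
                used.add(j)
        x = sum(d["pos"][0] for d in group) / len(group)
        y = sum(d["pos"][1] for d in group) / len(group)
        fused.append({"sensor": "fused", "pos": (x, y),
                      "sources": [d["sensor"] for d in group]})
    return fused

# The same pedestrian seen by camera, lidar and radar becomes one fused object
detections = [{"sensor": "camera", "pos": (12.1, 1.9)},
              {"sensor": "lidar",  "pos": (12.3, 2.0)},
              {"sensor": "radar",  "pos": (12.0, 2.2)}]
print(fuse_detections(detections))
```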
Localization 🗺️
Most self-driving cars use an HD map. The map provides information about the geometry of the road, traffic rules and the positions of relevant objects (e.g. traffic lights).
Localization in the map is done using GPS and landmarks detected by the sensors.
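A very simplified way to think about this: start from the noisy GPS position and correct it using landmarks the sensors detect that are also stored in the map. The sketch below is a toy averaging-based correction with made-up weights, not how any particular stack does it:

```python
def refine_pose(gps_xy, observed_landmarks, map_landmarks):
    """Refine a GPS position using landmark observations (toy example).

    observed_landmarks: {landmark_id: (dx, dy)} offsets measured by the sensors,
                        relative to the car.
    map_landmarks:      {landmark_id: (x, y)} absolute positions from the HD map.
    """
    implied = []
    for lm_id, (dx, dy) in observed_landmarks.items():
        if lm_id in map_landmarks:
            mx, my = map_landmarks[lm_id]
            implied.append((mx - dx, my - dy))   # where the car must be
    if not implied:
        return gps_xy                            # no landmarks matched: trust GPS
    lx = sum(p[0] for p in implied) / len(implied)
    ly = sum(p[1] for p in implied) / len(implied)
    w = 0.8                                      # trust landmarks more than GPS (arbitrary)
    return (w * lx + (1 - w) * gps_xy[0], w * ly + (1 - w) * gps_xy[1])

# Example: GPS is off by ~2 m, two mapped traffic signs pull the estimate back
print(refine_pose((102.0, 48.0),
                  {"sign_17": (10.0, 5.0), "sign_23": (-4.0, 8.0)},
                  {"sign_17": (110.0, 55.0), "sign_23": (96.0, 58.0)}))
```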
Prediction 🔮
The goal of this module is to predict the actions and trajectories of other traffic participants.
Will the car in front brake?
Will the car cut in front of me?
Will the pedestrian cross the street?
This is crucial for the car to plan its own trajectory!
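The simplest baseline here is a constant-velocity prediction: assume every other car keeps its current speed and heading. Real stacks use learned models conditioned on the map and on interactions, but the sketch below shows the shape of the problem:

```python
def predict_constant_velocity(position, velocity, horizon_s=3.0, dt=0.5):
    """Predict a trajectory assuming constant velocity (a common baseline,
    used here only to illustrate the interface of a prediction module)."""
    x, y = position
    vx, vy = velocity
    trajectory = []
    t = dt
    while t <= horizon_s + 1e-9:
        trajectory.append((x + vx * t, y + vy * t))
        t += dt
    return trajectory

# Car 20 m ahead of us, driving 10 m/s in the same direction
print(predict_constant_velocity((20.0, 0.0), (10.0, 0.0)))
```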
Planning 🔀
In this step the car plans its own trajectory and actions. It needs to consider all other traffic participants, their intentions and the surrounding infrastructure.
One of the main objectives when planning is to maximize safety, but the car should also drive naturally.
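One common way to encode "safe but natural" is to score candidate trajectories with a cost function that penalizes both closeness to other objects and uncomfortable maneuvers, then pick the cheapest one. The sketch below is a toy version of that idea with made-up weights:

```python
import math

def trajectory_cost(trajectory, predicted_others, target_speed, dt=0.5,
                    w_safety=10.0, w_comfort=1.0, w_progress=1.0):
    """Toy cost: penalize proximity to other agents, jerky speed changes, and
    deviation from the target speed (weights are arbitrary)."""
    cost = 0.0
    prev_speed = None
    for k, (x, y) in enumerate(trajectory):
        # Safety: inverse distance to each predicted other agent at step k
        for other_traj in predicted_others:
            ox, oy = other_traj[min(k, len(other_traj) - 1)]
            cost += w_safety / max(math.hypot(x - ox, y - oy), 0.1)
        # Comfort and progress, from the step-to-step speed
        if k > 0:
            px, py = trajectory[k - 1]
            speed = math.hypot(x - px, y - py) / dt
            if prev_speed is not None:
                cost += w_comfort * abs(speed - prev_speed)   # avoid jerky speed changes
            cost += w_progress * abs(speed - target_speed)    # keep up with traffic
            prev_speed = speed
    return cost

def plan(candidates, predicted_others, target_speed):
    """Pick the candidate trajectory with the lowest cost."""
    return min(candidates, key=lambda tr: trajectory_cost(tr, predicted_others, target_speed))
```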
Control 🚙
The final step is controlling the throttle, brakes and steering so that the car actually drives the planned trajectory.
Here, it is important to have smooth and natural control, not a steering wheel that twitches all the time.
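A classic choice for the longitudinal part of this is a PID controller on the speed error, with the output clamped so the commands stay within limits. The sketch below is a textbook PID with illustrative gains that would have to be tuned on the real vehicle:

```python
class PIDController:
    """Textbook PID speed controller (illustrative gains, not tuned for any car)."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05, output_limits=(-1.0, 1.0)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.low, self.high = output_limits
        self.integral = 0.0
        self.prev_error = None

    def step(self, target_speed, current_speed, dt):
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output -> throttle, negative -> brakes
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.low, min(self.high, u))

# Example: close a 2 m/s gap to the planned speed over a few control cycles
pid = PIDController()
speed = 8.0
for _ in range(5):
    command = pid.step(target_speed=10.0, current_speed=speed, dt=0.1)
    speed += command * 0.5          # crude vehicle response model, just for the demo
    print(round(command, 3), round(speed, 2))
```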
If you liked this thread and want to read more about self-driving cars and machine learning give me a follow! 👍
I have many more threads like this planned 😃
@pythops @comma_ai They are also still missing some important features, like traffic light handling, for example.
The truth is probably somewhere in between modular and end-to-end. I recommend this talk by Raquel Urtasun that covers the topic quite well:
youtube.com
@mrsahoo You can then easily generate a lot of training data by taking the trajectory the car actually took and correcting for your own movement.
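A sketch of what "correcting for your own movement" could look like: express the future poses of the logged drive in the coordinate frame of the car at the current timestamp, so every frame gets a ground-truth future trajectory as a training label. The frame conversion below is standard 2D geometry; the log layout is made up for the example:

```python
import math

def future_trajectory_in_ego_frame(log, current_idx, horizon=10):
    """Convert the logged future positions into the coordinate frame of the
    car at time step current_idx (illustrative data layout: each log entry
    is (x, y, heading) in a global frame)."""
    x0, y0, yaw0 = log[current_idx]
    cos_y, sin_y = math.cos(-yaw0), math.sin(-yaw0)
    labels = []
    for x, y, _ in log[current_idx + 1:current_idx + 1 + horizon]:
        dx, dy = x - x0, y - y0
        # Rotate the global offset into the ego frame at current_idx
        labels.append((dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y))
    return labels

# A short straight drive heading north: the label is "keep going straight ahead"
log = [(0.0, i * 1.0, math.pi / 2) for i in range(20)]
print(future_trajectory_in_ego_frame(log, current_idx=0, horizon=3))
```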
@donkosee What you need to do is use additional landmarks that you can detect in the world and that are stored in the map to refine your position and work through GPS outages.
There are different approaches on what landmarks one could use.
@donkosee Waymo, for example, relies on lidar point clouds, which are quite dense, but there are also solutions with sparser maps that contain landmarks like traffic signs.
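To illustrate the "work through GPS outages" part: between landmark sightings the position can be propagated with odometry (dead reckoning), and each recognized landmark snaps the accumulated drift back. A minimal sketch under these assumptions, with made-up numbers:

```python
def track_position(start_xy, odometry_steps, landmark_fixes):
    """Dead-reckon with odometry and correct with landmark fixes (toy example).

    odometry_steps: list of (dx, dy) motion increments per time step.
    landmark_fixes: {step_index: (x, y)} absolute positions derived from a
                    recognized landmark (e.g. a traffic sign in the HD map).
    """
    x, y = start_xy
    history = [(x, y)]
    for k, (dx, dy) in enumerate(odometry_steps):
        x, y = x + dx, y + dy          # drifts slowly if the odometry is biased
        if k in landmark_fixes:
            x, y = landmark_fixes[k]   # a landmark sighting removes the drift
        history.append((x, y))
    return history

# 1 m forward per step, odometry biased 2 cm to the left, landmark fix at step 5
steps = [(1.0, 0.02)] * 10
print(track_position((0.0, 0.0), steps, {5: (6.0, 0.0)}))
```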
@dspkiran
- Reprocess all sensor data: use a new software version to process the raw data and generate the output again
- Reprocess all the function logic with a new software version or a bugfix
- Look at the behavior, compute KPIs, run automatic tests etc.
@dspkiran This can be done either in our data center or on a local developer machine if you want in-depth debugging capabilities.
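A rough sketch of what such a reprocessing (open-loop replay) step could look like: feed the recorded raw data through the new software version frame by frame, collect the outputs, and compare them against the recorded reference to compute a KPI. All names here are illustrative, not a real tooling API:

```python
def reprocess_log(recorded_frames, software, reference_outputs):
    """Open-loop replay: run each recorded frame through the new software
    version and compute a simple agreement KPI against the reference outputs
    (illustrative; real tooling tracks many KPIs and runs regression tests)."""
    matches = 0
    new_outputs = []
    for frame, reference in zip(recorded_frames, reference_outputs):
        output = software.process(frame)       # new software version under test
        new_outputs.append(output)
        matches += int(output == reference)
    kpi = matches / len(recorded_frames) if recorded_frames else 0.0
    return new_outputs, kpi

class StubSoftware:
    """Stand-in for the software version under test."""
    def process(self, frame):
        return frame["object_count"]           # trivial example output

frames = [{"object_count": n} for n in (3, 5, 2)]
_, agreement = reprocess_log(frames, StubSoftware(), reference_outputs=[3, 4, 2])
print(agreement)   # two of three frames match the reference
```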
@pythops @comma_ai They are also at the mercy of the car manufacturers not to cut off access to the signals they need on the OBD port.
