Introduction to Sensor Fusion and Scene Understanding in AD/ADAS
Course Overview
This course introduces the fundamental concepts, architectures, and practical challenges of sensor fusion and scene understanding for autonomous driving and ADAS systems. It is delivered as an invited lecture (lectio magistralis) at graduate engineering schools.
Lecture agenda:
- Context — ADAS/AD functional structure
- Perception — Sensors and detection pipelines (camera, LiDAR, radar, ultrasonic)
- Data Fusion — Probabilistic frameworks, Kalman filters, multi-sensor architectures
- High-level fusion example: Objects — Multi-object tracking and data association
- High-level fusion example: Road — Road scene representation and fusion
- Scene Understanding — Semantic perception, environment modelling
Theory — Lecture Slides
Slides are coming soon and will be available for download on this page.
Practice — Python Lab
The hands-on lab is a Jupyter notebook guiding students through key sensor fusion algorithms implemented in Python.
Lab exercises include:
- Kalman filter — implementation for linear state estimation and tracking
- Sensor fusion — combining measurements from multiple sensors (camera, LiDAR, radar)
- Path prediction & collision detection — predicting future trajectories and detecting potential collisions
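To give a flavour of the first two exercises, here is a minimal sketch of a linear Kalman filter that sequentially fuses position measurements from two sensors. All matrices, noise values, and sensor characteristics below are illustrative assumptions, not the lab's reference solution.

```python
import numpy as np

dt = 0.1                          # time step [s] (assumed)
F = np.array([[1.0, dt],          # constant-velocity state transition
              [0.0, 1.0]])        # state: [position, velocity]
Q = np.diag([0.01, 0.01])         # process noise covariance (assumed)
H = np.array([[1.0, 0.0]])        # both sensors observe position only

x = np.array([[0.0], [1.0]])      # initial state: 0 m, 1 m/s
P = np.eye(2)                     # initial state covariance

def predict(x, P):
    """Propagate state and covariance one step forward."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    """Fuse one measurement z with noise covariance R."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Hypothetical sensor noise: LiDAR more precise than radar for position.
R_lidar = np.array([[0.05]])
R_radar = np.array([[0.30]])

for k in range(1, 11):
    x, P = predict(x, P)
    true_pos = k * dt * 1.0               # noiseless ground truth at 1 m/s
    x, P = update(x, P, np.array([[true_pos]]), R_lidar)
    x, P = update(x, P, np.array([[true_pos]]), R_radar)

print(float(x[0, 0]))   # estimated position after 1 s, close to 1.0 m
```

Fusing the two sensors as two sequential updates per cycle is one common multi-sensor architecture; the lecture also covers alternatives such as stacking measurements into a single update.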
This is a preview version of the notebook: exercises contain skeleton code and "coming soon" placeholders; full solutions will be released alongside the lecture slides.
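As a taste of the path-prediction exercise, the sketch below extrapolates two tracked objects under a constant-velocity model and flags the first time step at which they come dangerously close. The object states, the safety radius, and the prediction horizon are made-up illustrations, not values from the lab.

```python
import numpy as np

def predict_path(pos, vel, horizon=3.0, dt=0.1):
    """Extrapolate future 2-D positions under a constant-velocity model."""
    steps = int(horizon / dt)
    t = np.arange(1, steps + 1)[:, None] * dt     # shape (steps, 1)
    return pos[None, :] + t * vel[None, :]        # shape (steps, 2)

def first_collision(path_a, path_b, radius=2.0):
    """Index of the first step where the paths come within `radius`
    metres of each other, or None if they never do."""
    dists = np.linalg.norm(path_a - path_b, axis=1)
    hits = np.flatnonzero(dists < radius)
    return int(hits[0]) if hits.size else None

# Hypothetical scenario: ego vehicle heading east at 10 m/s,
# target approaching the same crossing point from the south.
ego = predict_path(np.array([0.0, 0.0]),    np.array([10.0, 0.0]))
tgt = predict_path(np.array([15.0, -15.0]), np.array([0.0, 10.0]))

step = first_collision(ego, tgt)
if step is not None:
    print(f"predicted conflict at t = {(step + 1) * 0.1:.1f} s")
# → predicted conflict at t = 1.4 s
```

A fixed distance threshold is the simplest possible collision criterion; the exercise can be extended with time-to-collision estimates or uncertainty-aware (covariance-based) gating.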