Introduction to Sensor Fusion and Scene Understanding in AD/ADAS

Sensor fusion and scene understanding for autonomous driving

Course Overview

This course introduces the fundamental concepts, architectures, and practical challenges of sensor fusion and scene understanding for autonomous driving (AD) and advanced driver-assistance (ADAS) systems. It is delivered as a lectio magistralis at graduate engineering schools.

Lecture agenda:

  1. Context — ADAS/AD functional structure
  2. Perception — Sensors and detection pipelines (camera, LiDAR, radar, ultrasonic)
  3. Data Fusion — Probabilistic frameworks, Kalman filters, multi-sensor architectures
  4. High-level fusion example: Objects — Multi-object tracking and data association
  5. High-level fusion example: Road — Road scene representation and fusion
  6. Scene Understanding — Semantic perception, environment modelling

Theory — Lecture Slides

Slides are coming soon and will be available for download on this page.

Practice — Python Lab

The hands-on lab is a Jupyter notebook guiding students through key sensor fusion algorithms implemented in Python.

Lab exercises include:

  • Kalman filter — implementation for linear state estimation and tracking
  • Sensor fusion — combining measurements from multiple sensors (camera, LiDAR, radar)
  • Path prediction & collision detection — predicting future trajectories and detecting potential collisions

This is a preview version of the notebook. Exercises contain skeleton code and "coming soon" placeholders; full solutions will be released alongside the lecture slides.
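To give a flavour of what the lab covers, here is a minimal sketch of a linear Kalman filter that fuses two position sensors and predicts a short path ahead. It assumes NumPy, an illustrative constant-velocity motion model, and made-up noise levels; the helper names (`kf_predict`, `kf_update`) are assumptions for this sketch, not the notebook's actual API.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    # Propagate the state and covariance through the motion model.
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    # Correct the prediction with one sensor measurement z.
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

np.random.seed(0)

# Constant-velocity model: state = [px, py, vx, vy], time step dt = 0.1 s.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.01 * np.eye(4)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # both sensors observe position only

x = np.zeros(4)
P = np.eye(4)

# Illustrative noise levels: a noisier sensor (e.g. radar-like) and a
# more precise one (e.g. camera-like), each measuring 2D position.
R_noisy = 0.5 * np.eye(2)
R_precise = 0.1 * np.eye(2)

for t in range(1, 50):
    truth = np.array([t * dt * 2.0, 0.0])   # target moving at 2 m/s along x
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, truth + 0.5 * np.random.randn(2), H, R_noisy)
    x, P = kf_update(x, P, truth + 0.1 * np.random.randn(2), H, R_precise)

# Simple path prediction: propagate the estimate 1 s ahead, the kind of
# step the collision-detection exercise builds on.
horizon = 1.0
future_pos = x[:2] + horizon * x[2:]
```

Fusing both sensors in sequence is sufficient here because each update conditions the same state estimate on a new measurement; the lab extends this idea to heterogeneous measurement models.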
Federico Camarda
AD/ADAS Software Engineer | PhD in Automation and Robotics

My work and research aim to make vehicles smarter and safer.