
Deep MultiModal Learning for Automotive Applications

Reference number: 2023-00763
Coordinator: ZENSEACT AB
Funding from Vinnova: SEK 12 262 880
Project duration: September 2023 - September 2027
Status: Ongoing
Venture: Safe automated driving – FFI
Call: Traffic-safe automation - FFI - spring 2023

Purpose and goal

This project aims to create multimodal sensor fusion methods for advanced and robust automotive perception systems. The project will focus on three key areas: (1) Develop multimodal fusion architectures and representations for both dynamic and static objects. (2) Investigate self-supervised learning techniques for multimodal data in an automotive setting. (3) Improve the perception system's ability to robustly handle rare events, objects, and road users.
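As an illustration only, the sketch below shows one common way such multimodal fusion can be realized: camera feature tokens attend to lidar feature tokens through cross-attention. This is not the project's actual architecture; the module, dimensions, and toy inputs are assumptions made for the example.

```python
# Minimal sketch of mid-level camera-lidar fusion with cross-attention (PyTorch).
# All names, dimensions, and inputs are illustrative, not the project's design.

import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse camera and lidar feature tokens via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Camera tokens query the lidar tokens; the reverse direction could be added.
        self.cam_to_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, dim)

    def forward(self, cam_tokens: torch.Tensor, lidar_tokens: torch.Tensor) -> torch.Tensor:
        # cam_tokens:   (batch, n_cam, dim)   e.g. image patch or ROI features
        # lidar_tokens: (batch, n_lidar, dim) e.g. pillar or voxel features
        attended, _ = self.cam_to_lidar(query=cam_tokens, key=lidar_tokens, value=lidar_tokens)
        fused = self.norm(cam_tokens + attended)  # residual connection
        return self.head(fused)                   # fused multimodal representation


if __name__ == "__main__":
    # Toy example: 2 samples, 16 camera tokens, 32 lidar tokens, 256-dim features.
    fusion = CrossModalFusion(dim=256, num_heads=8)
    cam = torch.randn(2, 16, 256)
    lidar = torch.randn(2, 32, 256)
    print(fusion(cam, lidar).shape)  # torch.Size([2, 16, 256])
```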

Expected effects and result

In this project, we focus on techniques that can improve the accuracy and robustness of perception systems for Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). We therefore expect our techniques to contribute to enhanced safety of ADAS/AD-equipped vehicles, which in turn can accelerate public adoption of AD systems. Through this increased adoption, we hope to contribute to considerably safer transportation for all road users.

Planned approach and implementation

This project is divided into four work packages, the first of which is dedicated to project management and dissemination. The other three each focus on a sub-problem within the multimodal learning topic. The research within these work packages will be carried out by three research teams, consisting of three PhD students and their supervisors from Chalmers and their respective companies, that is, Zenseact and Volvo Cars. Each research team has its own focus and problem setting. All teams will meet every three months to exchange ideas and share their progress.

The project description has been provided by the project members themselves, and the text has not been reviewed by our editors.

Last updated 22 September 2023
