
Enable AI models to run on the edge

Reference number 2023-01059
Coordinator Volvo Technology AB
Funding from Vinnova SEK 361 711
Project duration June 2023 - May 2024
Status Completed
Venture Accelerate Swedish partnership

Important results from the project

Summary: The project optimized two of Volvo's AI use cases for current and future hardware platforms using Embedl's methods, focusing on deep neural networks for computer vision and signal processing as well as on analysis of future hardware. Results: The target use cases achieved low inference latency and satisfactory DNN performance on the selected hardware targets, and a process was established for evaluating next-generation hardware for DNN inference.

Expected long term effects

Project participants now have a clearer understanding of deep neural network optimization techniques and their application for in-vehicle DNN model inference. This knowledge provides a solid foundation for effectively utilizing this technology in future projects.

Approach and implementation

The purpose of the project was to optimize two of Volvo's AI-powered use cases to run on Volvo's existing and future hardware platforms. Using the Embedl SDK, the target use cases reached low inference latency on the selected hardware targets, and a process was established for evaluating next-generation hardware for DNN inference. Volvo and Embedl met every two weeks to steer the project, and engineers from both companies worked together to implement the solutions.
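The Embedl SDK is proprietary and its internals are not described in the project summary. As a generic illustration of one standard technique for reducing DNN inference latency on edge hardware, the sketch below shows symmetric 8-bit post-training weight quantization in plain Python; the function names and values are illustrative, not taken from the project.

```python
# Generic sketch (not the Embedl SDK): symmetric per-tensor int8 weight
# quantization, a common step when preparing DNNs for integer-only
# edge accelerators.

def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding bounds the per-weight error by scale / 2
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, scale, max_err)
```

In practice, toolchains apply this per layer (often per channel), calibrate activation ranges on sample data, and then benchmark latency on the actual hardware target, since the speedup depends on the accelerator's integer arithmetic support.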

The project description has been provided by the project members themselves, and the text has not been reviewed by our editors.

Last updated 20 February 2025
