Enable AI models to run on the edge
Reference number | |
Coordinator | Volvo Technology AB |
Funding from Vinnova | SEK 361 711 |
Project duration | June 2023 - May 2024 |
Status | Completed |
Venture | Accelerate Swedish partnership |
Important results from the project
Summary: The project used Embedl's methods to optimize two of Volvo's AI use cases for current and future hardware platforms, focusing on deep neural networks (DNNs) for computer vision and signal processing as well as on analysis of future hardware.
Results: The target use cases achieved low inference latency and satisfactory DNN performance on the selected hardware targets, and we established a process for evaluating next-generation hardware for DNN inference.
Expected long-term effects
Project participants now have a clearer understanding of deep neural network optimization techniques and their application to in-vehicle DNN inference. This knowledge provides a solid foundation for applying the technology effectively in future projects.
Approach and implementation
The purpose of the project was to optimize two of Volvo's AI-powered use cases to run on Volvo's existing and future hardware platforms. Using the Embedl SDK, the target use cases reached low inference latency on the selected hardware targets, and we established a process for evaluating next-generation hardware for DNN inference. Volvo and Embedl met every two weeks to steer the project, and engineers from both companies worked together to implement the solutions.
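As an illustration of the kind of latency evaluation such a process involves, the sketch below benchmarks a generic vision DNN on a chosen device. It is a minimal example assuming PyTorch and torchvision; it does not use the Embedl SDK, and the model, input shape, and device are placeholder assumptions, not the project's actual use cases or hardware.

```python
# Illustrative only: a generic inference-latency benchmark for a vision DNN.
# Model, input size, and device are placeholders; a real evaluation would use
# the optimized project models on the candidate hardware target.
import time
import torch
import torchvision

def measure_latency(model, example_input, warmup=10, iters=100):
    """Return median single-batch inference latency in milliseconds."""
    model.eval()
    timings = []
    with torch.no_grad():
        for _ in range(warmup):            # warm-up runs are discarded
            model(example_input)
        for _ in range(iters):
            start = time.perf_counter()
            model(example_input)
            timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return timings[len(timings) // 2]

if __name__ == "__main__":
    device = torch.device("cpu")           # swap for the hardware target under evaluation
    model = torchvision.models.mobilenet_v3_small(weights=None).to(device)
    example = torch.randn(1, 3, 224, 224, device=device)
    print(f"median latency: {measure_latency(model, example):.2f} ms")
```

A harness along these lines could be rerun with the same input on each candidate hardware platform, so that latencies of the optimized models can be compared under identical conditions.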