Publication
CAI 2024
Conference paper

On Efficient Object-Detection NAS for ADAS on Edge devices


Abstract

Object detection is a crucial building block for Advanced Driving Assistance Systems (ADAS). These systems require accurate, real-time detection on resource-constrained edge devices. Deep learning models are displacing traditional methods thanks to their superior performance, but deploying them is hindered by their inference time and computational cost, as well as the challenges of training them for specialized tasks. We address this using supernet training-based neural architecture search (NAS) to obtain, at scale, a variety of object detection models tailored to the ADAS application. To this end, we consider a convolutional neural network-based object detection model and produce a palette of CNN models on the CityScapes and BDD10K datasets, catering to diverse parameter-count and accuracy tradeoffs. Our resulting models range from 1.8M to 2.6M parameters, with mAP scores between 29.70% and 33.60% on CityScapes and between 20.08% and 23.35% on BDD10K. Inspired by the popularity of Large Vision Models, we further develop cost-effective transformer-based ADAS object detection models, obtaining a palette of transformer models ranging from 69.1M to 113M parameters with mAP scores between 28.58% and 32.43% on CityScapes and between 24.31% and 26.51% on BDD10K.
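The core idea behind supernet-based NAS described above is that a single over-parameterized network is trained once, and many sub-architectures ("subnets") are then sampled from it and filtered against a deployment budget, yielding a palette of models without retraining each one. The sketch below is a minimal, hypothetical illustration of that sampling-and-filtering step; the search-space dimensions (depth, width, kernel size), the parameter-count proxy, and the budget value are all illustrative assumptions, not the paper's actual search space.

```python
import random

# Hypothetical search space for a supernet: each elastic dimension
# (depth, channel width, kernel size) offers a few discrete choices.
# NOTE: these values are illustrative, not from the paper.
SEARCH_SPACE = {
    "depth": [2, 3, 4],      # number of backbone stages
    "width": [32, 48, 64],   # base channel width
    "kernel": [3, 5],        # convolution kernel size
}

def sample_subnet(rng):
    """Sample one sub-architecture configuration from the search space."""
    return {dim: rng.choice(options) for dim, options in SEARCH_SPACE.items()}

def param_estimate(cfg):
    """Crude parameter-count proxy (conv weights scale with
    depth * width^2 * kernel^2) used to rank sampled subnets."""
    return cfg["depth"] * cfg["width"] ** 2 * cfg["kernel"] ** 2

# Sample a palette of candidate subnets from the (already trained) supernet,
# then keep only those under an edge-device parameter budget.
rng = random.Random(0)
palette = [sample_subnet(rng) for _ in range(16)]
budget = 4 * 48 * 48 * 3 * 3  # assumed deployment constraint
deployable = [cfg for cfg in palette if param_estimate(cfg) <= budget]
```

In a real pipeline, each surviving configuration would inherit its weights from the supernet and be evaluated for mAP on the target dataset, producing the accuracy/size tradeoff curve the abstract reports.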
