VLS: Vehicle Tail Light Signal Detection Benchmark
Abstract
Many car accidents are caused by a driver's failure to identify the driving state of the vehicle ahead accurately and in time. It is therefore important to detect the driving state of the vehicle ahead accurately, promptly, and automatically. Many factors affect recognition accuracy, such as lighting conditions, weather, and the viewing angle of the vehicle's tail lights. Few existing autonomous driving datasets can be used to train deep learning models that accurately identify the driving state of the vehicle ahead and fully meet these needs. The proposed VLS (vehicle tail light signal) dataset covers eight vehicle driving states: normal driving, braking, left turn, and right turn, each under both daytime and nighttime conditions. By identifying these driving states in real-world scenarios from the on-off states of the tail lights (on the left, right, and top of the vehicle's rear), the dataset can help predict the future trajectory of the vehicle ahead and support appropriate driving decisions. We also analyze why some hard samples are difficult to detect. Six mainstream object detection algorithms are trained and evaluated on our dataset, and their detection accuracies are reported. These algorithms are readily available for identifying the vehicle driving states and achieve a strong speed-accuracy trade-off on the VLS dataset. The results show that the dataset is useful for the development of autonomous driving systems.
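To make the labeling scheme concrete, the sketch below shows one possible way to derive the eight VLS driving-state classes from tail-light on-off states and time of day. This is an illustrative assumption, not code from the paper; the function and argument names (`label_from_lights`, `left_on`, `right_on`, `brake_on`, `is_night`) are hypothetical.

```python
# Illustrative sketch (not from the paper): mapping observed tail-light
# on-off states plus day/night to the eight VLS driving-state classes.
from enum import Enum


class DrivingState(Enum):
    NORMAL_DAY = 0
    BRAKE_DAY = 1
    LEFT_TURN_DAY = 2
    RIGHT_TURN_DAY = 3
    NORMAL_NIGHT = 4
    BRAKE_NIGHT = 5
    LEFT_TURN_NIGHT = 6
    RIGHT_TURN_NIGHT = 7


def label_from_lights(left_on: bool, right_on: bool, brake_on: bool,
                      is_night: bool) -> DrivingState:
    """Map hypothetical tail-light observations to one of the eight classes."""
    offset = 4 if is_night else 0
    if brake_on:
        return DrivingState(1 + offset)   # braking
    if left_on and not right_on:
        return DrivingState(2 + offset)   # left turn
    if right_on and not left_on:
        return DrivingState(3 + offset)   # right turn
    return DrivingState(0 + offset)       # normal driving


# Example: a braking vehicle observed at night maps to BRAKE_NIGHT.
print(label_from_lights(left_on=False, right_on=False, brake_on=True, is_night=True))
```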