Lane marking detection and classification using spatial-temporal feature pooling

Publisher

Universidade Federal do Espírito Santo

Abstract

The lane detection problem has been extensively researched over the past decades, especially since the advent of deep learning. Despite the numerous works proposing solutions to the localization task (i.e., localizing the lane boundaries in an input image), the classification task has not received the same attention. Nonetheless, knowing the type of a lane boundary, particularly that of the ego lane, can be very useful for many applications. For instance, a vehicle might not be allowed by law to overtake depending on the type of the ego lane's boundary. Moreover, very few works take advantage of the temporal information available in the videos captured by the vehicles: most methods employ a single-frame approach. In this work, building upon the recent deep learning-based model LaneATT, we propose an approach that exploits this temporal information and integrates the classification task into the model. This is accomplished by extracting features from multiple frames using a deep neural network (instead of a single frame, as in LaneATT). Our results show that the proposed modifications improve detection performance on the most recent benchmark (VIL-100) by 2.34%, establishing a new state of the art. Finally, an extensive evaluation shows that the model achieves a high classification performance (89.37%) that can serve as a future benchmark for the field.
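The core multi-frame idea described above (pooling features extracted from several video frames into a single representation) can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `temporal_feature_pooling` and the pooling modes are illustrative assumptions.

```python
import numpy as np


def temporal_feature_pooling(frame_features, mode="max"):
    """Pool per-frame feature maps across the time axis.

    frame_features: list of T arrays of shape (C, H, W), one per video frame,
    e.g. produced by a CNN backbone applied to each frame independently.
    Returns a single (C, H, W) array summarizing the clip.
    (Hypothetical helper, for illustration only.)
    """
    stacked = np.stack(frame_features, axis=0)  # shape (T, C, H, W)
    if mode == "max":
        return stacked.max(axis=0)   # keep the strongest response over time
    return stacked.mean(axis=0)      # or average the responses over time


# Example: pool 3 frames of 8-channel 4x4 feature maps into one map
feats = [np.random.rand(8, 4, 4) for _ in range(3)]
pooled = temporal_feature_pooling(feats)
print(pooled.shape)  # (8, 4, 4)
```

The pooled map has the same spatial layout as a single-frame feature map, so downstream heads (here, the detection and classification heads) can consume it unchanged; only the feature extraction stage needs to see multiple frames.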

Keywords

Autonomous vehicles, Autonomous driving, Deep learning, Video object detection, Lane detection
