16 February 2023 Multiscale feature fusion method for lane line detection based on time series
Chao Fan, Yingying Qiu, Fangfang Chen, Hao Lin, Litao Yang
Abstract

To address lane line detection under difficult traffic conditions, such as shadow occlusion, signpost degradation, curves, and tunnels, numerous models have been proposed. However, most existing models detect on independent single frames, which makes it difficult to exploit the continuity of driving images and performs poorly in challenging scenes. To this end, we propose a spatiotemporal information processing model for lane line recognition that enhances critical features. First, to properly learn the correlation between consecutive images, we employ a convolutional gated recurrent unit to process spatiotemporal driving information on top of a U-Net backbone. Second, a pyramid split attention (PSA) module enhances or suppresses the obtained feature expressions. Finally, skip connections fuse the multiscale features encoded at each stage with the PSA-processed features and gradually restore the original image size. Experiments on the TuSimple dataset demonstrate that our model outperforms representative lane line detection networks in challenging driving scenes, with an F1-measure of up to 94.302%.
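The core temporal component described above, the convolutional gated recurrent unit, replaces the fully connected gate transforms of a standard GRU with convolutions, so the hidden state keeps its spatial layout across frames. The following is a minimal single-channel numpy sketch of one such cell; the kernel initialization, map sizes, and the plain zero-padded 3x3 convolution are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv2d(x, k):
    # "same" 3x3 convolution on a 2D map with zero padding
    H, W = x.shape
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class ConvGRUCell:
    """Single-channel ConvGRU: every gate is a pair of 3x3 convolutions
    (one over the input frame, one over the hidden state)."""
    def __init__(self, rng):
        # one kernel pair per gate; small random init for illustration only
        self.kz_x, self.kz_h = rng.standard_normal((2, 3, 3)) * 0.1
        self.kr_x, self.kr_h = rng.standard_normal((2, 3, 3)) * 0.1
        self.kh_x, self.kh_h = rng.standard_normal((2, 3, 3)) * 0.1

    def step(self, x, h):
        z = sigmoid(conv2d(x, self.kz_x) + conv2d(h, self.kz_h))  # update gate
        r = sigmoid(conv2d(x, self.kr_x) + conv2d(h, self.kr_h))  # reset gate
        h_tilde = np.tanh(conv2d(x, self.kh_x) + conv2d(r * h, self.kh_h))
        return (1 - z) * h + z * h_tilde                          # blended state

rng = np.random.default_rng(0)
cell = ConvGRUCell(rng)
frames = rng.standard_normal((5, 8, 8))   # 5 consecutive feature maps
h = np.zeros((8, 8))
for x in frames:                          # recur over the time series
    h = cell.step(x, h)
print(h.shape)  # → (8, 8): the hidden state keeps the frames' spatial shape
```

Because the recurrent state is itself a feature map, it can be fused with the U-Net encoder features through the skip connections described in the abstract.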

© 2023 SPIE and IS&T
Chao Fan, Yingying Qiu, Fangfang Chen, Hao Lin, and Litao Yang "Multiscale feature fusion method for lane line detection based on time series," Journal of Electronic Imaging 32(1), 013035 (16 February 2023). https://doi.org/10.1117/1.JEI.32.1.013035
Received: 28 July 2022; Accepted: 31 January 2023; Published: 16 February 2023
KEYWORDS
Roads, Feature fusion, Feature extraction, Shadows, Convolution, Image fusion, Visualization
