Saliency Heat-Map as Visual Attention for Autonomous Driving Using Generative Adversarial Network (GAN)
Title | Saliency Heat-Map as Visual Attention for Autonomous Driving Using Generative Adversarial Network (GAN) |
Type de publication | Journal Article |
Year of Publication | Submitted |
Authors | Lateef F, Kas M, Ruichek Y |
Journal | IEEE Transactions on Intelligent Transportation Systems |
Type of Article | Article; Early Access |
ISSN | 1524-9050 |
Keywords | autonomous driving, autonomous vehicles, Computational modeling, generative adversarial network, Generative Adversarial Networks, Predictive models, Saliency detection, scene understanding, Vehicles, Visual saliency, visualization |
Abstract | The ability to sense and understand the driving environment is a key technology for ADAS and autonomous driving. Human drivers pay more visual attention to important or target elements and ignore unnecessary ones in their field of view. A model that computes this visual attention to targets in a specific driving environment is essential and useful in supporting autonomous driving, object-specific tracking and detection, driver training, car collision warning, traffic sign detection, etc. In this paper, we propose a new visual attention framework that can predict important objects in the driving scene using a conditional generative adversarial network. A large-scale Visual Attention Driving Database (VADD) of saliency heat-maps is built from existing driving datasets using a saliency mechanism. The proposed framework derives its strength from these saliency heat-maps, which serve as conditioning label variables. The results show that the proposed approach is able to predict heat-maps of the most important objects in a driving environment. |
DOI | 10.1109/TITS.2021.3053178 |
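
Illustrative sketch (not from the paper): the abstract describes a conditional GAN that maps a driving frame to a saliency heat-map used as a conditioning label variable. The minimal PyTorch sketch below shows one way such an image-to-heat-map cGAN could be set up, in the spirit of a pix2pix-style model; the network depths, losses, L1 weighting, and the VADD data pipeline are assumptions for illustration only, not the authors' actual architecture.

# Minimal conditional-GAN sketch: RGB driving frame (3 ch) -> saliency heat-map (1 ch).
# Architecture and hyper-parameters are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder generator producing a heat-map in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

class Discriminator(nn.Module):
    """PatchGAN-style critic over the (image, heat-map) pair, i.e. the conditioned input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, heatmap):
        return self.net(torch.cat([image, heatmap], dim=1))

def train_step(gen, disc, opt_g, opt_d, image, real_heatmap, l1_weight=100.0):
    """One adversarial update with an added L1 term toward the saliency target."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake_heatmap = gen(image)

    # Discriminator: distinguish real (image, heat-map) pairs from generated ones.
    opt_d.zero_grad()
    real_logits = disc(image, real_heatmap)
    fake_logits = disc(image, fake_heatmap.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the reference heat-map.
    opt_g.zero_grad()
    fake_logits = disc(image, fake_heatmap)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + l1_weight * l1(fake_heatmap, real_heatmap)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
    # Dummy batch standing in for a hypothetical (frame, saliency heat-map) pair from VADD.
    image, heatmap = torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
    print(train_step(gen, disc, opt_g, opt_d, image, heatmap))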