File Download

There are no files associated with this item.

Conference Paper: Cross Validation for CNN based Affordance Learning and Control for Autonomous Driving

Title: Cross Validation for CNN based Affordance Learning and Control for Autonomous Driving
Authors: Sun, Chen; Su, Lang; Gu, Sunsheng; Uwabeza Vianney, Jean M.; Qin, Kongjian; Cao, Dongpu
Issue Date: 2019
Citation: 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, 2019, p. 1519-1524
Abstract: Autonomous driving has attracted a significant amount of research effort over the last few decades owing to the exponential growth of computational power and reduced cost of sensors. As a safety-sensitive task, autonomous driving needs a detailed level of scene understanding for decision making, planning, and control. This paper investigates Convolutional Neural Network (CNN) based methods for affordance learning in driving scene understanding. Various perception models are built and evaluated for driving scene affordance learning in both the virtual environment and real sampled data. We also propose a conditional control model that maps the extracted coarse set of driving affordances to low-level control, conditioned on the given driving priors. The performance and merits of the CNN based perception models and the control model are analyzed and cross-validated on both virtual and real data.
Persistent Identifier: http://hdl.handle.net/10722/352973
ISI Accession Number ID: WOS:000521238101089
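
The abstract describes a two-stage pipeline: a CNN perception model that extracts a coarse set of driving affordances from camera images, and a conditional control model that maps those affordances to low-level control given a driving prior. The PyTorch sketch below is only an illustration of such a pipeline; the class names, affordance dimension, layer sizes, and one-hot driving prior are assumptions, not the authors' implementation.

# Illustrative sketch only; not code from the paper. Affordance set,
# network depth, and conditioning scheme are hypothetical.
import torch
import torch.nn as nn

class AffordancePerception(nn.Module):
    """CNN mapping a camera image to a coarse affordance vector
    (e.g. lane-centre offset, heading error, distance to lead vehicle)."""
    def __init__(self, num_affordances: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_affordances)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image).flatten(1))

class ConditionalController(nn.Module):
    """MLP mapping affordances to low-level control (steering, throttle/brake),
    conditioned on a one-hot driving prior concatenated to the input."""
    def __init__(self, num_affordances: int = 6, num_priors: int = 3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_affordances + num_priors, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, affordances: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([affordances, prior], dim=1))

if __name__ == "__main__":
    perception, controller = AffordancePerception(), ConditionalController()
    image = torch.randn(1, 3, 96, 96)        # dummy front-camera frame
    prior = torch.tensor([[1.0, 0.0, 0.0]])  # e.g. a "lane following" prior
    control = controller(perception(image), prior)
    print(control.shape)                     # torch.Size([1, 2])

In this sketch the driving prior is simply concatenated with the affordance vector before the control MLP; the paper's actual conditioning mechanism may differ.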

 

DC Field: Value

dc.contributor.author: Sun, Chen
dc.contributor.author: Su, Lang
dc.contributor.author: Gu, Sunsheng
dc.contributor.author: Uwabeza Vianney, Jean M.
dc.contributor.author: Qin, Kongjian
dc.contributor.author: Cao, Dongpu
dc.date.accessioned: 2025-01-13T03:01:24Z
dc.date.available: 2025-01-13T03:01:24Z
dc.date.issued: 2019
dc.identifier.citation: 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, 2019, p. 1519-1524
dc.identifier.uri: http://hdl.handle.net/10722/352973
dc.description.abstract: Autonomous driving has attracted a significant amount of research effort over the last few decades owing to the exponential growth of computational power and reduced cost of sensors. As a safety-sensitive task, autonomous driving needs a detailed level of scene understanding for decision making, planning, and control. This paper investigates Convolutional Neural Network (CNN) based methods for affordance learning in driving scene understanding. Various perception models are built and evaluated for driving scene affordance learning in both the virtual environment and real sampled data. We also propose a conditional control model that maps the extracted coarse set of driving affordances to low-level control, conditioned on the given driving priors. The performance and merits of the CNN based perception models and the control model are analyzed and cross-validated on both virtual and real data.
dc.language: eng
dc.relation.ispartof: 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019
dc.title: Cross Validation for CNN based Affordance Learning and Control for Autonomous Driving
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ITSC.2019.8917385
dc.identifier.scopus: eid_2-s2.0-85076802637
dc.identifier.spage: 1519
dc.identifier.epage: 1524
dc.identifier.isi: WOS:000521238101089
