
Conference Paper: Spatial as deep: Spatial CNN for traffic scene understanding

Title: Spatial as deep: Spatial CNN for traffic scene understanding
Authors: Pan, Xingang; Shi, Jianping; Luo, Ping; Wang, Xiaogang; Tang, Xiaoou
Issue Date: 2018
Publisher: Association for the Advancement of Artificial Intelligence. The conference proceedings' web site is located at https://www.aaai.org/ocs/index.php/AAAI/AAAI18/index
Citation: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 2018, p. 7276-7283
Abstract: Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Convolutional neural networks (CNNs) are usually built by stacking convolutional operations layer-by-layer. Although CNNs have shown a strong capability to extract semantics from raw pixels, their capacity to capture spatial relationships of pixels across rows and columns of an image is not fully explored. These relationships are important for learning semantic objects with strong shape priors but weak appearance coherence, such as traffic lanes, which are often occluded or not even painted on the road surface, as shown in Fig. 1 (a). In this paper, we propose Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. SCNN is particularly suitable for long continuous shape structures or large objects with strong spatial relationships but few appearance clues, such as traffic lanes, poles, and walls. We apply SCNN to a newly released, very challenging traffic lane detection dataset and the Cityscapes dataset. The results show that SCNN can learn the spatial relationship for structured output and significantly improves performance. We show that SCNN outperforms the recurrent neural network (RNN) based ReNet and MRF+CNN (MRFNet) on the lane detection dataset by 8.7% and 4.6% respectively. Moreover, our SCNN won the 1st place on the TuSimple Benchmark Lane Detection Challenge, with an accuracy of 96.53%.
Persistent Identifier: http://hdl.handle.net/10722/273646
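The abstract's slice-by-slice scheme can be sketched in code. The following is a minimal NumPy illustration of one downward pass (the paper applies four such passes: downward, upward, rightward, leftward); the function name, shapes, and kernel layout here are illustrative assumptions, not the authors' implementation. Each row of the feature map is treated as a slice that receives a convolved, ReLU-gated message from the row above before its own message is passed on.

```python
import numpy as np

def scnn_downward(feature_map, kernel):
    """Sketch of one SCNN downward pass: rows of the feature map are
    treated as slices, and each slice receives a message from the
    previously updated slice above it.

    feature_map: (H, W, C) array.
    kernel: (k, C, C) kernel convolved along the width axis while
            mixing channels (k assumed odd).
    """
    H, W, C = feature_map.shape
    k = kernel.shape[0]
    pad = k // 2
    out = feature_map.copy()
    for i in range(1, H):
        # Pad the previously *updated* row along the width axis so
        # the convolution output keeps width W.
        prev = np.pad(out[i - 1], ((pad, pad), (0, 0)))
        msg = np.zeros((W, C))
        for w in range(W):
            # Window of shape (k, C) contracted with kernel (k, C, C)
            # gives the C-dimensional message at position w.
            msg[w] = np.einsum('kc,kcd->d', prev[w:w + k], kernel)
        # ReLU before the residual addition, so messages only add
        # information rather than cancel the slice's own features.
        out[i] = out[i] + np.maximum(msg, 0.0)
    return out
```

Because updated slices feed the next slice in sequence, information can travel the full height of the feature map in a single layer, which is what lets the network connect lane fragments separated by occlusions.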


DC Field: Value
dc.contributor.author: Pan, Xingang
dc.contributor.author: Shi, Jianping
dc.contributor.author: Luo, Ping
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Tang, Xiaoou
dc.date.accessioned: 2019-08-12T09:56:15Z
dc.date.available: 2019-08-12T09:56:15Z
dc.date.issued: 2018
dc.identifier.citation: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 2018, p. 7276-7283
dc.identifier.uri: http://hdl.handle.net/10722/273646
dc.description.abstract: Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Convolutional neural networks (CNNs) are usually built by stacking convolutional operations layer-by-layer. Although CNNs have shown a strong capability to extract semantics from raw pixels, their capacity to capture spatial relationships of pixels across rows and columns of an image is not fully explored. These relationships are important for learning semantic objects with strong shape priors but weak appearance coherence, such as traffic lanes, which are often occluded or not even painted on the road surface, as shown in Fig. 1 (a). In this paper, we propose Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. SCNN is particularly suitable for long continuous shape structures or large objects with strong spatial relationships but few appearance clues, such as traffic lanes, poles, and walls. We apply SCNN to a newly released, very challenging traffic lane detection dataset and the Cityscapes dataset. The results show that SCNN can learn the spatial relationship for structured output and significantly improves performance. We show that SCNN outperforms the recurrent neural network (RNN) based ReNet and MRF+CNN (MRFNet) on the lane detection dataset by 8.7% and 4.6% respectively. Moreover, our SCNN won the 1st place on the TuSimple Benchmark Lane Detection Challenge, with an accuracy of 96.53%.
dc.language: eng
dc.publisher: Association for the Advancement of Artificial Intelligence. The conference proceedings' web site is located at https://www.aaai.org/ocs/index.php/AAAI/AAAI18/index
dc.relation.ispartof: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
dc.title: Spatial as deep: Spatial CNN for traffic scene understanding
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85056513388
dc.identifier.spage: 7276
dc.identifier.epage: 7283
