Article: Can we trust our eyes? Interpreting the misperception of road safety from street view images and deep learning

Title: Can we trust our eyes? Interpreting the misperception of road safety from street view images and deep learning
Authors: Yu, Xujing; Ma, Jun; Tang, Yihong; Yang, Tianren; Jiang, Feifeng
Keywords: Built environment; Deep learning; Human perception; Road safety; Street view images
Issue Date: 1-Mar-2024
Publisher: Elsevier
Citation: Accident Analysis & Prevention, 2024, v. 197
Abstract: Road safety is a critical concern that impacts both human lives and urban development, drawing significant attention from city managers and researchers. The perception of road safety has gained increasing research interest due to its close connection with the behavior of road users. However, safety is not always as it appears, and there is a scarcity of studies examining the association and mismatch between road traffic safety and road safety perceptions at the city scale, primarily due to the time-consuming nature of data acquisition. In this study, we applied an advanced deep learning model and street view images to predict and map human perception scores of road safety in Manhattan. We then explored the association and mismatch between these perception scores and traffic crash rates, while also interpreting the influence of the built environment on this disparity. The results showed that there was heterogeneity in the distribution of road safety perception scores. Furthermore, the study found a positive correlation between perception scores and crash rates, indicating that higher perception scores were associated with higher crash rates. In this study, we also identified four perception patterns: “Safer than it looks”, “Safe as it looks”, “More dangerous than it looks”, and “Dangerous as it looks”. Wall view index, tree view index, building view index, distance to the nearest traffic signals, and street width were found to significantly influence these perception patterns. Notably, our findings underscored the crucial role of traffic lights in the “More dangerous than it looks” pattern. While traffic lights may enhance people's perception of safety, areas in close proximity to traffic lights were identified as potentially accident-prone regions.
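
For illustration only, the mismatch analysis described in the abstract can be sketched as a simple quadrant comparison between predicted perception scores and observed crash rates. The sketch below is not the paper's actual pipeline: the data values, the median thresholds, and the column names are hypothetical assumptions used only to show how street segments could be binned into the four perception patterns named above.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-segment data: a safety perception score predicted from
# street view images by a deep learning model, and an observed crash rate.
segments = pd.DataFrame({
    "segment_id": [1, 2, 3, 4],
    "perception_score": [0.82, 0.35, 0.91, 0.28],  # higher = perceived safer
    "crash_rate": [4.1, 0.9, 5.6, 3.8],            # crashes per unit exposure
})

# Association between perceived safety and observed crash rates.
r, p = pearsonr(segments["perception_score"], segments["crash_rate"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Quadrant-style labelling into the four perception patterns, assuming each
# pattern is defined by crossing perception scores with crash rates at their
# medians (an assumption; the abstract does not spell out the thresholds).
perc_med = segments["perception_score"].median()
crash_med = segments["crash_rate"].median()

def pattern(row):
    looks_safe = row["perception_score"] >= perc_med
    is_dangerous = row["crash_rate"] >= crash_med
    if looks_safe and is_dangerous:
        return "More dangerous than it looks"
    if looks_safe:
        return "Safe as it looks"
    if is_dangerous:
        return "Dangerous as it looks"
    return "Safer than it looks"

segments["perception_pattern"] = segments.apply(pattern, axis=1)
print(segments[["segment_id", "perception_pattern"]])

Under this quadrant reading, a segment with a high perception score but a high crash rate falls into the “More dangerous than it looks” group, which is the pattern the abstract associates with proximity to traffic lights.
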
Persistent Identifier: http://hdl.handle.net/10722/345866
ISSN: 0001-4575
2023 Impact Factor: 5.7
2023 SCImago Journal Rankings: 1.897

 

DC Field: Value
dc.contributor.author: Yu, Xujing
dc.contributor.author: Ma, Jun
dc.contributor.author: Tang, Yihong
dc.contributor.author: Yang, Tianren
dc.contributor.author: Jiang, Feifeng
dc.date.accessioned: 2024-09-04T07:06:03Z
dc.date.available: 2024-09-04T07:06:03Z
dc.date.issued: 2024-03-01
dc.identifier.citation: Accident Analysis & Prevention, 2024, v. 197
dc.identifier.issn: 0001-4575
dc.identifier.uri: http://hdl.handle.net/10722/345866
dc.description.abstract: Road safety is a critical concern that impacts both human lives and urban development, drawing significant attention from city managers and researchers. The perception of road safety has gained increasing research interest due to its close connection with the behavior of road users. However, safety is not always as it appears, and there is a scarcity of studies examining the association and mismatch between road traffic safety and road safety perceptions at the city scale, primarily due to the time-consuming nature of data acquisition. In this study, we applied an advanced deep learning model and street view images to predict and map human perception scores of road safety in Manhattan. We then explored the association and mismatch between these perception scores and traffic crash rates, while also interpreting the influence of the built environment on this disparity. The results showed that there was heterogeneity in the distribution of road safety perception scores. Furthermore, the study found a positive correlation between perception scores and crash rates, indicating that higher perception scores were associated with higher crash rates. In this study, we also identified four perception patterns: “Safer than it looks”, “Safe as it looks”, “More dangerous than it looks”, and “Dangerous as it looks”. Wall view index, tree view index, building view index, distance to the nearest traffic signals, and street width were found to significantly influence these perception patterns. Notably, our findings underscored the crucial role of traffic lights in the “More dangerous than it looks” pattern. While traffic lights may enhance people's perception of safety, areas in close proximity to traffic lights were identified as potentially accident-prone regions.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Accident Analysis & Prevention
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Built environment
dc.subject: Deep learning
dc.subject: Human perception
dc.subject: Road safety
dc.subject: Street view images
dc.title: Can we trust our eyes? Interpreting the misperception of road safety from street view images and deep learning
dc.type: Article
dc.identifier.doi: 10.1016/j.aap.2023.107455
dc.identifier.pmid: 38218132
dc.identifier.scopus: eid_2-s2.0-85182504954
dc.identifier.volume: 197
dc.identifier.eissn: 1879-2057
dc.identifier.issnl: 0001-4575
