File Download: There are no files associated with this item.

Conference Paper: Dynamic and Static Context-aware LSTM for Multi-agent Motion Prediction

Title: Dynamic and Static Context-aware LSTM for Multi-agent Motion Prediction
Authors: Tao, C; Jiang, Q; Duan, L; Luo, P
Keywords: Motion prediction; Trajectory Forecasting; Social model
Issue Date: 2020
Citation: The 16th European Conference on Computer Vision (ECCV), Online, 23-28 August 2020
Abstract: Multi-agent motion prediction is challenging because it aims to foresee the future trajectories of multiple agents (e.g., pedestrians) simultaneously in a complicated scene. Existing work addressed this challenge either by learning the social spatial interactions represented by the positions of a group of pedestrians, while ignoring their temporal coherence (i.e., dependencies between different long trajectories), or by understanding the complicated scene layout (e.g., scene segmentation) to ensure safe navigation. Unlike previous work that treated the spatial interactions, temporal coherence, and scene layout in isolation, this paper designs a new mechanism, the Dynamic and Static Context-aware Motion Predictor (DSCMP), which integrates this rich information into a long short-term memory (LSTM) network. It has three appealing benefits. (1) DSCMP models the dynamic interactions between agents by learning both their spatial positions and temporal coherence, as well as understanding the contextual scene layout. (2) Unlike previous LSTM models that predict motions by propagating hidden features frame by frame, which limits the capacity to learn correlations between long trajectories, we carefully design a differentiable queue mechanism in DSCMP that can explicitly memorize and learn the correlations between long trajectories. (3) DSCMP captures the scene context by inferring a latent variable, which enables multimodal predictions with a meaningful semantic scene layout. Extensive experiments show that DSCMP outperforms state-of-the-art methods by large margins, with 9.05% and 7.62% relative improvements on the ETH-UCY and SDD datasets, respectively.
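To make the differentiable-queue idea in point (2) of the abstract concrete, below is a minimal PyTorch sketch (not the authors' released implementation) of an LSTM cell that keeps a fixed-length queue of past hidden states and softly attends over it at every step, so gradients reach states several frames back rather than only the previous frame. The class name QueueLSTMCell, the dimensions, and the dot-product attention form are all illustrative assumptions.

    # Minimal sketch (NOT the paper's code): LSTM cell with a fixed-length,
    # differentiable queue of past hidden states, attended over at each step.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QueueLSTMCell(nn.Module):
        def __init__(self, input_dim, hidden_dim):
            super().__init__()
            # The cell consumes the input concatenated with a queue summary.
            self.cell = nn.LSTMCell(input_dim + hidden_dim, hidden_dim)
            self.attn = nn.Linear(hidden_dim, hidden_dim)  # scores queue entries

        def forward(self, x, state, queue):
            h, c = state
            # Soft attention over queued hidden states (queue: B x Q x H).
            scores = torch.einsum('bqh,bh->bq', self.attn(queue), h)
            weights = F.softmax(scores, dim=1)
            context = torch.einsum('bq,bqh->bh', weights, queue)
            h, c = self.cell(torch.cat([x, context], dim=1), (h, c))
            # Enqueue the new hidden state, dequeue the oldest. Only slicing
            # and concatenation are used, so the queue stays differentiable.
            queue = torch.cat([queue[:, 1:], h.unsqueeze(1)], dim=1)
            return (h, c), queue

    # Toy usage: a batch of 2D positions over 8 observed frames.
    B, H, Q = 16, 32, 4
    cell = QueueLSTMCell(input_dim=2, hidden_dim=H)
    h, c = torch.zeros(B, H), torch.zeros(B, H)
    queue = torch.zeros(B, Q, H)
    for t in range(8):
        (h, c), queue = cell(torch.randn(B, 2), (h, c), queue)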
Description: Poster Presentation - Paper ID: 3801; ECCV 2020 took place virtually due to COVID-19
Persistent Identifier: http://hdl.handle.net/10722/284151

DC Field | Value | Language
dc.contributor.author | Tao, C | -
dc.contributor.author | Jiang, Q | -
dc.contributor.author | Duan, L | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2020-07-20T05:56:29Z | -
dc.date.available | 2020-07-20T05:56:29Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | The 16th European Conference on Computer Vision (ECCV), Online, 23-28 August 2020 | -
dc.identifier.uri | http://hdl.handle.net/10722/284151 | -
dc.description | Poster Presentation - Paper ID: 3801 | -
dc.description | ECCV 2020 took place virtually due to COVID-19 | -
dc.description.abstract | Multi-agent motion prediction is challenging because it aims to foresee the future trajectories of multiple agents (e.g., pedestrians) simultaneously in a complicated scene. Existing work addressed this challenge either by learning the social spatial interactions represented by the positions of a group of pedestrians, while ignoring their temporal coherence (i.e., dependencies between different long trajectories), or by understanding the complicated scene layout (e.g., scene segmentation) to ensure safe navigation. Unlike previous work that treated the spatial interactions, temporal coherence, and scene layout in isolation, this paper designs a new mechanism, the Dynamic and Static Context-aware Motion Predictor (DSCMP), which integrates this rich information into a long short-term memory (LSTM) network. It has three appealing benefits. (1) DSCMP models the dynamic interactions between agents by learning both their spatial positions and temporal coherence, as well as understanding the contextual scene layout. (2) Unlike previous LSTM models that predict motions by propagating hidden features frame by frame, which limits the capacity to learn correlations between long trajectories, we carefully design a differentiable queue mechanism in DSCMP that can explicitly memorize and learn the correlations between long trajectories. (3) DSCMP captures the scene context by inferring a latent variable, which enables multimodal predictions with a meaningful semantic scene layout. Extensive experiments show that DSCMP outperforms state-of-the-art methods by large margins, with 9.05% and 7.62% relative improvements on the ETH-UCY and SDD datasets, respectively. | -
dc.language | eng | -
dc.relation.ispartof | European Conference on Computer Vision (ECCV) | -
dc.subject | Motion prediction | -
dc.subject | Trajectory Forecasting | -
dc.subject | Social model | -
dc.title | Dynamic and Static Context-aware LSTM for Multi-agent Motion Prediction | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 311010 | -
