Article: AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments

Title: AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments
Authors: Liu, Shinan; Mangla, Tarun; Shaowang, Ted; Zhao, Jinjin; Paparrizos, John; Krishnan, Sanjay; Feamster, Nick
Keywords: activity recognition; datasets; multimodal learning
Issue Date: 2023
Citation: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023, v. 7, n. 1, article no. 21
Abstract: Activity recognition using video data is widely adopted for elder care, monitoring for safety and security, and home automation. Unfortunately, using video data as the basis for activity recognition can be brittle, since models trained on video are often not robust to certain environmental changes, such as camera angle and lighting changes. There has been a proliferation of network-connected devices in home environments. Interactions with these smart devices are associated with network activity, making network data a potential source for recognizing these device interactions. This paper advocates for the synthesis of video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches for activity recognition, where each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop a simple but effective framework, AMIR (Active Multimodal Interaction Recognition), that trains independent models for video and network activity recognition, respectively, and subsequently combines the predictions from these models using a meta-learning framework. Whether in the lab or at home, this approach reduces the number of "paired" demonstrations needed to perform accurate activity recognition, where both network and video data are collected simultaneously. Specifically, the method we have developed requires up to 70.83% fewer samples to achieve an 85% F1 score than random data collection, and improves accuracy by 17.76% given the same number of samples.
Persistent Identifier: http://hdl.handle.net/10722/363521
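The abstract describes a late-fusion design: independent per-modality classifiers whose predictions are combined by a meta-learner, plus active selection of which "paired" demonstrations to collect next. Below is a minimal Python sketch of that stacking idea; the random-forest base models, logistic-regression meta-learner, least-confidence acquisition rule, feature shapes, and synthetic data are all illustrative assumptions, not the paper's actual architecture or dataset.

# Minimal stacking sketch of the two-modality fusion the abstract
# describes. All models, feature shapes, and data are illustrative
# assumptions, not the AMIR paper's actual architecture or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features for "paired" demonstrations,
# where each labeled activity has both a video and a network trace.
X_video = rng.normal(size=(200, 32))   # e.g., pooled video embeddings
X_net = rng.normal(size=(200, 16))     # e.g., per-flow traffic statistics
y = rng.integers(0, 5, size=200)       # activity labels (5 classes)

# Independent per-modality models.
video_model = RandomForestClassifier(random_state=0)
net_model = RandomForestClassifier(random_state=0)

# Out-of-fold class probabilities become the meta-learner's features,
# so the meta-learner never sees predictions on training folds.
meta_X = np.hstack([
    cross_val_predict(video_model, X_video, y, method="predict_proba"),
    cross_val_predict(net_model, X_net, y, method="predict_proba"),
])
meta_model = LogisticRegression(max_iter=1000).fit(meta_X, y)

# Refit the base models on all paired data for inference.
video_model.fit(X_video, y)
net_model.fit(X_net, y)

def predict(video_feats, net_feats):
    """Fuse both modalities' predictions through the meta-learner."""
    z = np.hstack([
        video_model.predict_proba(video_feats),
        net_model.predict_proba(net_feats),
    ])
    return meta_model.predict(z)

def next_to_label(video_pool, net_pool, k=5):
    """Least-confidence acquisition over an unlabeled paired pool
    (a generic active-learning stand-in, not AMIR's exact criterion)."""
    z = np.hstack([
        video_model.predict_proba(video_pool),
        net_model.predict_proba(net_pool),
    ])
    conf = meta_model.predict_proba(z).max(axis=1)
    return np.argsort(conf)[:k]   # indices of the k least-confident samples

print(predict(X_video[:3], X_net[:3]))

Using out-of-fold predictions to train the meta-learner is a standard stacking precaution: fitting it on in-sample base-model outputs would let it learn from overconfident predictions and overstate the value of each modality.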


DC Field    Value    Language
dc.contributor.author    Liu, Shinan    -
dc.contributor.author    Mangla, Tarun    -
dc.contributor.author    Shaowang, Ted    -
dc.contributor.author    Zhao, Jinjin    -
dc.contributor.author    Paparrizos, John    -
dc.contributor.author    Krishnan, Sanjay    -
dc.contributor.author    Feamster, Nick    -
dc.date.accessioned    2025-10-10T07:47:32Z    -
dc.date.available    2025-10-10T07:47:32Z    -
dc.date.issued    2023    -
dc.identifier.citation    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2023, v. 7, n. 1, article no. 21    -
dc.identifier.uri    http://hdl.handle.net/10722/363521    -
dc.description.abstract    Activity recognition using video data is widely adopted for elder care, monitoring for safety and security, and home automation. Unfortunately, using video data as the basis for activity recognition can be brittle, since models trained on video are often not robust to certain environmental changes, such as camera angle and lighting changes. There has been a proliferation of network-connected devices in home environments. Interactions with these smart devices are associated with network activity, making network data a potential source for recognizing these device interactions. This paper advocates for the synthesis of video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches for activity recognition, where each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop a simple but effective framework, AMIR (Active Multimodal Interaction Recognition), that trains independent models for video and network activity recognition, respectively, and subsequently combines the predictions from these models using a meta-learning framework. Whether in the lab or at home, this approach reduces the number of "paired" demonstrations needed to perform accurate activity recognition, where both network and video data are collected simultaneously. Specifically, the method we have developed requires up to 70.83% fewer samples to achieve an 85% F1 score than random data collection, and improves accuracy by 17.76% given the same number of samples.    -
dc.language    eng    -
dc.relation.ispartof    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies    -
dc.subject    activity recognition    -
dc.subject    datasets    -
dc.subject    multimodal learning    -
dc.title    AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments    -
dc.type    Article    -
dc.description.nature    link_to_subscribed_fulltext    -
dc.identifier.doi    10.1145/3580818    -
dc.identifier.scopus    eid_2-s2.0-85150703793    -
dc.identifier.volume    7    -
dc.identifier.issue    1    -
dc.identifier.spage    article no. 21    -
dc.identifier.epage    article no. 21    -
dc.identifier.eissn    2474-9567    -
