
Conference Paper: FROSTER: Frozen CLIP is a Strong Teacher for Open-vocabulary Action Recognition

Title: FROSTER: Frozen CLIP is a Strong Teacher for Open-vocabulary Action Recognition
Authors: Huang, Xiaohu; Zhou, Hao; Yao, Kun; Han, Kai
Issue Date: 7-May-2024
Abstract

In this paper, we introduce FROSTER, an effective framework for open-vocabulary action recognition. The CLIP model has achieved remarkable success in a range of image-based tasks, benefiting from its strong generalization capability stemming from pretraining on massive image-text pairs. However, applying CLIP directly to the open-vocabulary action recognition task is challenging due to the absence of temporal information in CLIP's pretraining. Further, fine-tuning CLIP on action recognition datasets may lead to overfitting and hinder its generalizability, resulting in unsatisfactory results when dealing with unseen actions.
To address these issues, FROSTER employs a residual feature distillation approach to ensure that CLIP retains its generalization capability while effectively adapting to the action recognition task. Specifically, the residual feature distillation treats the frozen CLIP model as a teacher to maintain the generalizability exhibited by the original CLIP and supervises the feature learning for the extraction of video-specific features to bridge the gap between images and videos. Meanwhile, it uses a residual sub-network for feature distillation to reach a balance between the two distinct objectives of learning generalizable and video-specific features.
We extensively evaluate FROSTER on open-vocabulary action recognition benchmarks under both base-to-novel and cross-dataset settings. FROSTER consistently achieves state-of-the-art performance across all datasets.
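The residual feature distillation described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, feature dimension, ReLU activation, and the scaling factor `alpha` are all assumptions made for the example. The idea it illustrates is that a tunable feature passes through a small residual sub-network, and the result is pulled toward the frozen CLIP teacher's feature by an L2 distillation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_projection(feat, w1, w2, alpha=0.1):
    # Residual sub-network (hypothetical form): feat + alpha * MLP(feat).
    # The residual path lets the model learn video-specific adjustments
    # while the identity path keeps the feature close to its input.
    hidden = np.maximum(feat @ w1, 0.0)  # two-layer MLP with ReLU
    return feat + alpha * (hidden @ w2)

def distillation_loss(student_feat, teacher_feat):
    # L2 distance between the (projected) student feature and the
    # frozen-teacher feature; minimizing this keeps the student aligned
    # with the generalizable CLIP representation.
    return float(np.mean((student_feat - teacher_feat) ** 2))

dim = 8  # illustrative feature dimension
w1 = rng.normal(scale=0.1, size=(dim, dim))
w2 = rng.normal(scale=0.1, size=(dim, dim))
student = rng.normal(size=dim)   # tunable video-model feature
teacher = rng.normal(size=dim)   # frozen CLIP feature (no gradient)

projected = residual_projection(student, w1, w2)
loss = distillation_loss(projected, teacher)
```

In training, this distillation term would be combined with the task's classification loss so the model balances video-specific learning against staying close to the frozen teacher.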


Persistent Identifier: http://hdl.handle.net/10722/347578


DC Field: Value
dc.contributor.author: Huang, Xiaohu
dc.contributor.author: Zhou, Hao
dc.contributor.author: Yao, Kun
dc.contributor.author: Han, Kai
dc.date.accessioned: 2024-09-25T00:30:51Z
dc.date.available: 2024-09-25T00:30:51Z
dc.date.issued: 2024-05-07
dc.identifier.uri: http://hdl.handle.net/10722/347578
dc.description.abstract: <p>In this paper, we introduce FROSTER, an effective framework for open-vocabulary action recognition. The CLIP model has achieved remarkable success in a range of image-based tasks, benefiting from its strong generalization capability stemming from pretraining on massive image-text pairs. However, applying CLIP directly to the open-vocabulary action recognition task is challenging due to the absence of temporal information in CLIP's pretraining. Further, fine-tuning CLIP on action recognition datasets may lead to overfitting and hinder its generalizability, resulting in unsatisfactory results when dealing with unseen actions.<br>To address these issues, FROSTER employs a residual feature distillation approach to ensure that CLIP retains its generalization capability while effectively adapting to the action recognition task. Specifically, the residual feature distillation treats the frozen CLIP model as a teacher to maintain the generalizability exhibited by the original CLIP and supervises the feature learning for the extraction of video-specific features to bridge the gap between images and videos. Meanwhile, it uses a residual sub-network for feature distillation to reach a balance between the two distinct objectives of learning generalizable and video-specific features.<br>We extensively evaluate FROSTER on open-vocabulary action recognition benchmarks under both base-to-novel and cross-dataset settings. FROSTER consistently achieves state-of-the-art performance across all datasets.<br></p>
dc.language: eng
dc.relation.ispartof: The Twelfth International Conference on Learning Representations (ICLR) (07/05/2024-11/05/2024, Vienna)
dc.title: FROSTER: Frozen CLIP is a Strong Teacher for Open-vocabulary Action Recognition
dc.type: Conference_Paper
