Conference Paper: Not all models are equal: Predicting model transferability in a self-challenging fisher space
Title | Not all models are equal: Predicting model transferability in a self-challenging fisher space |
---|---|
Authors | Shao, W; Zhao, X; Ge, Y; Shan, Y; Luo, P |
Issue Date | 2022 |
Publisher | Ortra Ltd. |
Citation | European Conference on Computer Vision (ECCV) (Hybrid), Tel Aviv, Israel, October 23-27, 2022 |
Abstract | This paper addresses an important problem of ranking the pre-trained deep neural networks and screening the most transferable ones for downstream tasks. It is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods proposed several lightweight transferability metrics to predict the fine-tuning results. However, these approaches only capture static representations but neglect the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called Self-challenging Fisher Discriminant Analysis (SFDA), which has many appealing benefits that existing works do not have. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate on hard examples. Third, SFDA can easily select multiple pre-trained models for the model ensemble. Extensive experiments on 33 pre-trained models of 11 downstream tasks show that SFDA is efficient, effective, and robust when measuring the transferability of pre-trained models. For instance, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average of 59.1% gain while bringing 22.5x speedup in wall-clock time. |
Persistent Identifier | http://hdl.handle.net/10722/315546 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shao, W | - |
dc.contributor.author | Zhao, X | - |
dc.contributor.author | Ge, Y | - |
dc.contributor.author | Shan, Y | - |
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2022-08-19T08:59:54Z | - |
dc.date.available | 2022-08-19T08:59:54Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | European Conference on Computer Vision (ECCV) (Hybrid), Tel Aviv, Israel, October 23-27, 2022 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315546 | - |
dc.description.abstract | This paper addresses an important problem of ranking the pre-trained deep neural networks and screening the most transferable ones for downstream tasks. It is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods proposed several lightweight transferability metrics to predict the fine-tuning results. However, these approaches only capture static representations but neglect the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called Self-challenging Fisher Discriminant Analysis (SFDA), which has many appealing benefits that existing works do not have. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate on hard examples. Third, SFDA can easily select multiple pre-trained models for the model ensemble. Extensive experiments on 33 pre-trained models of 11 downstream tasks show that SFDA is efficient, effective, and robust when measuring the transferability of pre-trained models. For instance, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average of 59.1% gain while bringing 22.5x speedup in wall-clock time. | -
dc.language | eng | - |
dc.publisher | Ortra Ltd. | -
dc.title | Not all models are equal: Predicting model transferability in a self-challenging fisher space | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.hkuros | 335574 | - |
dc.publisher.place | Israel | - |
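
The abstract describes SFDA's first step as embedding frozen pre-trained features into a Fisher space so that class separability can be measured without fine-tuning. Below is a minimal sketch of that general idea only, using scikit-learn's LinearDiscriminantAnalysis as the Fisher projection; it omits the self-challenging mechanism and ensemble selection described in the abstract, and the function name, scoring rule, and synthetic data are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (not the SFDA metric from the paper): score a pre-trained
# model's frozen features on a target task by how separable the classes are
# after a Fisher/LDA projection. Higher score = presumed better transferability.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def fisher_transferability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Fit FDA/LDA on frozen features and return the mean probability assigned
    to the true class, a simple proxy for class separability in Fisher space."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(features, labels)
    probs = lda.predict_proba(features)               # shape (n_samples, n_classes)
    true_class_prob = probs[np.arange(len(labels)), labels]
    return float(true_class_prob.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "pre-trained features" for two hypothetical models on the same
    # downstream task: model_A separates the classes better than model_B.
    n, d, n_classes = 300, 64, 5
    labels = rng.integers(0, n_classes, size=n)
    centers = rng.normal(size=(n_classes, d))
    feats_a = centers[labels] * 2.0 + rng.normal(size=(n, d))   # well separated
    feats_b = centers[labels] * 0.5 + rng.normal(size=(n, d))   # poorly separated
    for name, feats in [("model_A", feats_a), ("model_B", feats_b)]:
        print(name, round(fisher_transferability_score(feats, labels), 3))
    # A model zoo would then be ranked by this score, avoiding any fine-tuning.
```

In this sketch the ranking is read directly from the separability score, which mirrors the paper's motivation of replacing brute-force fine-tuning with a lightweight metric; the actual SFDA method additionally reweights hard examples (the self-challenging step) before measuring separability.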