
Copyright © 2022 Lei Zhao et al. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0/

Abstract

Few-shot classification aims to equip a network with the ability to extract features and predict labels for target categories given only a small number of labeled samples. Current few-shot classification methods focus on the pretraining stage, while fine-tuning is performed by experience or not at all. No fine-tuning or insufficient fine-tuning may yield low accuracy on the given tasks, while excessive fine-tuning leads to poor generalization on unseen samples. To solve these problems, this study proposes a hybrid fine-tuning strategy (HFT), comprising a few-shot linear discriminant analysis module (FSLDA) and an adaptive fine-tuning module (AFT). FSLDA constructs the optimal linear classification function under few-shot conditions to initialize the parameters of the last fully connected layer, which fully exploits the task-specific knowledge of the given tasks and guarantees a lower bound on model accuracy. AFT adopts an adaptive fine-tuning termination rule to determine the optimal number of training epochs and prevent the model from overfitting. AFT is also built on FSLDA and outputs the final optimal hybrid fine-tuning strategy for a given sample size and layer-freezing policy. We conducted extensive experiments on mini-ImageNet and tiered-ImageNet to demonstrate the effectiveness of the proposed method: it achieves consistent performance improvements over existing fine-tuning methods across different sample sizes, layer-freezing policies, and few-shot classification frameworks.
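The FSLDA idea of initializing the last fully connected layer with a closed-form linear discriminant can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): it computes the standard LDA discriminant weights w_c = S⁻¹μ_c and biases b_c = -½ μ_cᵀS⁻¹μ_c + log π_c from the few labeled support features, adding a ridge term `reg` to the shared covariance S for stability in the few-shot regime; the function name `fsl_lda_init` and the regularization scheme are assumptions.

```python
import numpy as np

def fsl_lda_init(features, labels, reg=1e-3):
    """Closed-form LDA weights/biases to initialize a linear classifier
    head from a few labeled support features (hypothetical FSLDA sketch).

    Returns (W, b) such that scores = features @ W.T + b.
    """
    classes = np.unique(labels)
    d = features.shape[1]
    # Per-class means mu_c (few-shot: very few samples per class).
    mus = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Shared within-class covariance S, ridge-regularized for stability.
    centered = features - mus[np.searchsorted(classes, labels)]
    S = centered.T @ centered / len(features) + reg * np.eye(d)
    S_inv = np.linalg.inv(S)
    # Linear discriminant: w_c = S^{-1} mu_c, one row per class.
    W = mus @ S_inv
    # Bias: b_c = -0.5 * mu_c^T S^{-1} mu_c + log(prior_c).
    priors = np.array([(labels == c).mean() for c in classes])
    b = -0.5 * np.einsum('cd,cd->c', W, mus) + np.log(priors)
    return W, b
```

The resulting (W, b) would serve as the initialization of the last fully connected layer before any gradient-based fine-tuning, which is what guarantees a non-trivial starting accuracy on the support set.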

Details

Title
Hybrid Fine-Tuning Strategy for Few-Shot Classification
Author
Zhao, Lei 1; Ou, Zhonghua 2; Zhang, Lixun 2; Li, Shuxiao 3

1 Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Electronic Science and Technology of China, Chengdu, China
2 University of Electronic Science and Technology of China, Chengdu, China
3 Institute of Automation, Chinese Academy of Sciences, Beijing, China
Editor
Nouman Ali
Publication year
2022
Publication date
2022
Publisher
John Wiley & Sons, Inc.
ISSN
1687-5265
e-ISSN
1687-5273
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2725125194