Pushing the Limits of Simple Pipelines for Few-Shot Learning:
External Data and Fine-Tuning Make a Difference

(CVPR 2022)

TL;DR: A three-stage pipeline for few-shot learning called PMF: Pre-training → Meta-training (ProtoNet) → Fine-tuning at meta-test time.


Shell Xu Hu, Da Li*, Jan Stühmer*,
Minyoung Kim*, Timothy Hospedales

Paper · Gradio demo · Code · Poster · Slides

Few-shot learning (FSL) is an important and topical problem in computer vision that has motivated extensive research into numerous methods, spanning from sophisticated meta-learning approaches to simple transfer-learning baselines. We seek to push the limits of a simple-but-effective pipeline for real-world few-shot image classification in practice. To this end, we explore few-shot learning from the perspective of neural architecture, as well as a three-stage pipeline of pre-training on external data, meta-training with labelled few-shot tasks, and task-specific fine-tuning on unseen tasks. We investigate questions such as: ① How does pre-training on external data benefit FSL? ② How can state-of-the-art transformer architectures be exploited? ③ How can fine-tuning best be exploited? Ultimately, we show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks such as Mini-ImageNet, CIFAR-FS, CDFSL and Meta-Dataset.
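
The meta-training stage follows ProtoNet: in each episode, class prototypes are built from the support embeddings and query images are classified by their similarity to these prototypes. Below is a minimal PyTorch-style sketch of this step, assuming a generic `backbone` that maps images to feature vectors and a cosine-similarity prototype classifier; the function names and the temperature value are illustrative placeholders, not taken from the official code.

```python
# Minimal sketch of ProtoNet-style meta-training (stage 2).
# Assumptions: PyTorch, a `backbone` returning feature vectors, and a
# cosine-similarity prototype classifier with an illustrative temperature.
import torch
import torch.nn.functional as F


def prototype_logits(backbone, support_x, support_y, query_x, tau=10.0):
    """Cosine-similarity logits of query images against class prototypes."""
    z_s = F.normalize(backbone(support_x), dim=-1)   # (N_support, D) embeddings
    z_q = F.normalize(backbone(query_x), dim=-1)     # (N_query, D) embeddings

    classes = support_y.unique()
    # Prototype = mean of the support embeddings belonging to each class.
    prototypes = torch.stack([z_s[support_y == c].mean(0) for c in classes])
    prototypes = F.normalize(prototypes, dim=-1)     # (N_way, D)

    return tau * z_q @ prototypes.t()                # (N_query, N_way)


def protonet_loss(backbone, support_x, support_y, query_x, query_y):
    """Episode loss: cross-entropy of query images over the class prototypes."""
    logits = prototype_logits(backbone, support_x, support_y, query_x)
    classes = support_y.unique()
    # Map original label ids to 0..N_way-1 indices within the episode.
    targets = (query_y.unsqueeze(1) == classes.unsqueeze(0)).long().argmax(dim=1)
    return F.cross_entropy(logits, targets)
```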

Overview


Figure: Overview of the three-stage PMF pipeline (pre-training on external data, meta-training with ProtoNet, and task-specific fine-tuning at meta-test).
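
The third stage adapts the meta-trained backbone to each unseen test task. A hedged sketch of one simple way to do this is shown below: copy the backbone, fine-tune it for a few steps on augmented views of the support set using the same prototype loss, and then classify the queries. It reuses the hypothetical `protonet_loss` and `prototype_logits` helpers from the sketch above, and the augmentation, step count and learning rate are placeholders rather than the paper's exact hyper-parameters.

```python
# Minimal sketch of stage 3, task-specific fine-tuning at meta-test time.
# Assumes the prototype_logits / protonet_loss sketches defined above.
import copy
import torch


def finetune_and_predict(backbone, support_x, support_y, query_x,
                         augment, steps=50, lr=1e-4):
    """Fine-tune a copy of the backbone on the support set, then label the queries."""
    model = copy.deepcopy(backbone)                  # keep the meta-trained weights intact
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    model.train()
    for _ in range(steps):
        # Augmented views of the support images act as pseudo-queries,
        # while the original support images define the prototypes.
        aug_x = augment(support_x)
        loss = protonet_loss(model, support_x, support_y, aug_x, support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    model.eval()
    with torch.no_grad():
        logits = prototype_logits(model, support_x, support_y, query_x)
        classes = support_y.unique()
        preds = classes[logits.argmax(dim=-1)]       # map back to original label ids
    return preds
```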

Results


Bibtex


@inproceedings{hu2022pmf,
  author    = {Hu, Shell Xu and Li, Da and St\"uhmer, Jan and Kim, Minyoung and Hospedales, Timothy M.},
  title     = {Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference},
  booktitle = {CVPR},
  year      = {2022}
}