- [2024-10-12] README updated and reorganized.
- [2024-10-12] Training instructions and project structure refined.
This repository provides the official implementation of "NTK-Guided Few-Shot Class Incremental Learning" (IEEE Transactions on Image Processing, 2024).
We study few-shot class-incremental learning (FSCIL) from a Neural Tangent Kernel (NTK) perspective and propose an effective framework that:
- improves feature adaptation across incremental sessions,
- supports multiple backbones and self-supervised initialization methods,
- provides flexible training settings for datasets, augmentations, and losses,
- achieves strong performance on standard FSCIL benchmarks.
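To ground the terminology: for a network f(x; θ), the NTK is Θ(x, x′) = ∇_θ f(x; θ)ᵀ ∇_θ f(x′; θ). Below is a minimal, illustrative sketch (not code from this repository) of the empirical NTK for a scalar-output PyTorch model:

```python
# Illustrative sketch, not this repo's code: empirical NTK between two
# batches, Theta(x1, x2) = J(x1) @ J(x2)^T, where J stacks per-sample
# gradients of the (scalar) network output w.r.t. all parameters.
import torch

def empirical_ntk(model, x1, x2):
    def flat_grad(x):
        out = model(x.unsqueeze(0)).squeeze()  # assumes scalar output
        grads = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])

    J1 = torch.stack([flat_grad(x) for x in x1])  # (n1, num_params)
    J2 = torch.stack([flat_grad(x) for x in x2])  # (n2, num_params)
    return J1 @ J2.T                              # (n1, n2) kernel matrix
```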
The repository includes:
- Training / evaluation pipeline for FSCIL
- Support for multiple datasets and architectures
- Self-supervised pre-training weight loading
- Configurable augmentation and alignment loss options
Project structure:

```
.
├── data/
│   └── index_list/
├── dataloader/
├── models/
├── LICENSE
├── README.md
├── cifar.py
├── requirements.txt
└── utils.py
```
```bash
conda create -n FSCIL python=3.8 -y
conda activate FSCIL
pip install -r requirements.txt
```

Before starting, it is important to understand the corresponding self-supervised learning repositories in depth. Since these repositories are originally designed for 224×224 images, you may need to adapt them to the smaller images used in FSCIL, such as CIFAR-100, Mini-ImageNet, and ImageNet-100 (one common adaptation is sketched below). After the required modifications, you can perform self-supervised pre-training on the base session of each dataset, train for 1000 epochs, and save the resulting pre-trained weights.
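As an example of the kind of modification required, the sketch below (an assumption on our part, not this repository's code) adapts torchvision's ResNet-18 stem, which is designed for 224×224 inputs, to 32×32 CIFAR-100 images:

```python
# Sketch: adapt torchvision's ResNet-18 for 32x32 inputs before
# self-supervised pre-training (illustrative, not this repo's code).
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=100)
# Swap the 7x7 stride-2 stem conv for a 3x3 stride-1 conv...
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
# ...and drop the initial max-pool so small images are not downsampled away.
model.maxpool = nn.Identity()
```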
Supported self-supervised frameworks include:
- MAE: Learns image representations by masking random patches and reconstructing the missing content.
- SparK: Efficient sparse pre-training for faster convergence and strong representation learning.
- DINO: A self-distillation framework for learning consistent features across views.
- MoCo-v3: An improved momentum contrastive learning framework.
- SimCLR: A simple and effective contrastive learning approach based on strong augmentations (an NT-Xent sketch follows this list).
- BYOL: Learns useful representations without negative pairs.
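To make the contrastive objective behind SimCLR (and, in spirit, MoCo-v3) concrete, here is a minimal NT-Xent sketch; the temperature value is an illustrative assumption, not a setting from this repo:

```python
# Illustrative NT-Xent (SimCLR-style) contrastive loss sketch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (N, D) embeddings of two augmented views of the same batch.
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2N, D)
    sim = z @ z.T / temperature                  # pairwise cosine logits
    sim.fill_diagonal_(float("-inf"))            # mask self-similarity
    n = z1.size(0)
    # The positive for view i is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)
```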
Using CIFAR-100 as an example, we provide pre-trained weights obtained from the above self-supervised frameworks for several architectures, including:
`resnet18`, `resnet12`, `vit_tiny`, `vit_small`
You can directly download the pre-trained weights from:
- Google Drive: https://drive.google.com/drive/folders/1RhyhZXETrxZqCkVb7UhQMIoQWZJqLogs?usp=drive_link
After downloading, place the `pretrain_weights/` folder in the root directory of this project.
Alternatively, you may pre-train the models yourself and modify the corresponding path in the `load_self_pretrain_weights` function in `utils.py`.
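For orientation, a typical loading pattern looks like the sketch below; the nested `state_dict` key and `strict=False` handling are common conventions across SSL checkpoints, not a description of this repo's exact code:

```python
# Illustrative only; the repo's actual logic lives in
# load_self_pretrain_weights() in utils.py. Checkpoint layouts vary by
# framework, so the "state_dict" key here is an assumption.
import torch

def load_backbone(model, ckpt_path):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # many checkpoints nest the weights
    # strict=False tolerates projector/decoder heads missing from the backbone.
    missing, unexpected = model.load_state_dict(state, strict=False)
    print(f"missing keys: {missing}\nunexpected keys: {unexpected}")
    return model
```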
We provide a variety of configurable settings for flexible experimentation:
- Datasets: `cifar100`, `cub200`, `miniimagenet`, `imagenet100`, `imagenet1000`
- Image Transformation Methods: `Normal`, `AMDIM`, `SimCLR`, `AutoAug`, `RandAug`
- Network Architectures: `resnet12`, `resnet18`, `vit_tiny`, `vit_small`
- Self-Supervised Pre-Trained Weights: `dino`, `spark`, `mae`, `moco-v3`, `simclr`, `byol`
- Alignment Losses: `curriculum`, `arcface`, `sphereface`, `cosface`, `crossentropy` (a margin-loss sketch follows below)
You can modify these settings directly in the command line for customized experiments.
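As a reference point for the margin-based alignment losses above, here is a minimal ArcFace-style sketch; the scale `s` and margin `m` are illustrative defaults, not this repo's settings:

```python
# Minimal ArcFace-style margin sketch (s and m are illustrative values).
import torch
import torch.nn.functional as F

def arcface_logits(features, class_weights, labels, s=30.0, m=0.5):
    # Cosine similarity between L2-normalized features and class weights.
    cos = F.normalize(features) @ F.normalize(class_weights).T
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    onehot = F.one_hot(labels, class_weights.size(0)).bool()
    # Add the angular margin m only on the ground-truth class, then
    # feed the returned logits to F.cross_entropy(logits, labels).
    return s * torch.cos(torch.where(onehot, theta + m, theta))
```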
To get started, simply run:
```bash
python cifar.py
```

You can further customize the dataset, architecture, pre-training method, augmentation strategy, and alignment loss in your experiment commands.
We present the results on CIFAR-100 below.
We have removed the NTK constraint on the linear layers, as that component is reserved for a future paper. We apologize for the inconvenience; however, this adjustment does not significantly affect performance and should not hinder further research or development based on this work.
If you find this repository useful, please cite our paper:
```bibtex
@article{liu2024ntk,
  title={NTK-guided few-shot class incremental learning},
  author={Liu, Jingren and Ji, Zhong and Pang, Yanwei and Yu, Yunlong},
  journal={IEEE Transactions on Image Processing},
  year={2024},
  publisher={IEEE}
}
```

For questions and discussions:
- Email: jrl0219@tju.edu.cn
- Issues: please open a GitHub Issue in this repo.
