
NTK-Guided Few-Shot Class Incremental Learning


Jingren Liu, Zhong Ji, Yanwei Pang, Yunlong Yu


🔥 News

  • [2024-10-12] README updated and reorganized.
  • [2024-10-12] Training instructions and project structure refined.

🧩 Overview

This repository provides the official implementation for:

NTK-Guided Few-Shot Class Incremental Learning (IEEE TIP 2024).

We study few-shot class-incremental learning (FSCIL) from a Neural Tangent Kernel (NTK) perspective and propose an effective framework that:

  • improves feature adaptation across incremental sessions,
  • supports multiple backbones and self-supervised initialization methods,
  • provides flexible training settings for datasets, augmentations, and losses,
  • achieves strong performance on standard FSCIL benchmarks.
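As background for the NTK perspective, the finite-width (empirical) NTK between two inputs is the inner product of the parameter gradients of the network output. A minimal PyTorch sketch, using a toy `SmallNet` and an `empirical_ntk` helper that are purely illustrative (not code from this repo):

```python
import torch

class SmallNet(torch.nn.Module):
    """Toy network; stands in for the FSCIL backbone."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
        )

    def forward(self, x):
        return self.net(x)

def empirical_ntk(model, x1, x2):
    """Scalar empirical NTK k(x1, x2) = <df(x1)/dtheta, df(x2)/dtheta>."""
    def grad_vec(x):
        out = model(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, model.parameters())
        return torch.cat([g.reshape(-1) for g in grads])
    return torch.dot(grad_vec(x1), grad_vec(x2))

model = SmallNet()
x1, x2 = torch.randn(8), torch.randn(8)
k = empirical_ntk(model, x1, x2)  # one entry of the kernel matrix
```

The kernel is symmetric and its diagonal entries are squared gradient norms, which is what makes it a useful lens on feature adaptation.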

✅ Key Features

  • Training / evaluation pipeline for FSCIL
  • Support for multiple datasets and architectures
  • Self-supervised pre-training weight loading
  • Configurable augmentation and alignment loss options

🗂️ Repository Structure

.
├── data/
│   └── index_list/
├── dataloader/
├── models/
├── LICENSE
├── README.md
├── cifar.py
├── requirements.txt
└── utils.py

🧰 Installation

1) Create environment

conda create -n FSCIL python=3.8 -y
conda activate FSCIL

2) Install dependencies

pip install -r requirements.txt

💫 Self-Supervised Pre-Training Weights

Before starting, familiarize yourself with the corresponding self-supervised learning repositories. Because these repositories are designed for 224×224 inputs, you may need to adapt them to the smaller images used in FSCIL, such as CIFAR-100, Mini-ImageNet, and ImageNet-100. After making the required modifications, perform self-supervised pre-training on the base session of each dataset for 1000 epochs and save the resulting weights.

Supported self-supervised frameworks include:

  • MAE: Learns image representations by masking random patches and reconstructing the missing content.
  • SparK: Efficient sparse pre-training for faster convergence and strong representation learning.
  • DINO: A self-distillation framework for learning consistent features across views.
  • MoCo-v3: An improved momentum contrastive learning framework.
  • SimCLR: A simple and effective contrastive learning approach based on strong augmentations.
  • BYOL: Learns useful representations without negative pairs.

Using CIFAR-100 as an example, we provide pre-trained weights obtained from the above self-supervised frameworks for several architectures, including:

  • resnet18, resnet12, vit_tiny, vit_small

Download

You can directly download the pre-trained weights from:

  • Google Drive: https://drive.google.com/drive/folders/1RhyhZXETrxZqCkVb7UhQMIoQWZJqLogs?usp=drive_link

After downloading, place the pretrain_weights/ folder in the root directory of this project.

Alternatively, you may pre-train the models yourself and modify the corresponding path in the load_self_pretrain_weights function in utils.py.


📦 Supported Configurations

We provide a variety of configurable settings for flexible experimentation:

  • Datasets:

    • cifar100, cub200, miniimagenet, imagenet100, imagenet1000
  • Image Transformation Methods:

    • Normal, AMDIM, SimCLR, AutoAug, RandAug
  • Network Architectures:

    • resnet12, resnet18, vit_tiny, vit_small
  • Self-Supervised Pre-Trained Weights:

    • dino, spark, mae, moco-v3, simclr, byol
  • Alignment Losses:

    • curriculum, arcface, sphereface, cosface, crossentropy

You can modify these settings directly in the command line for customized experiments.
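To illustrate one of the alignment-loss options, here is a minimal ArcFace-style margin sketch. The function name and the margin/scale defaults are illustrative, not this repo's implementation:

```python
import torch
import torch.nn.functional as F

def arcface_logits(features, weight, labels, margin=0.5, scale=16.0):
    """Cosine logits with an additive angular margin on the target class."""
    # Cosine similarity between L2-normalized features and class weights.
    cos = F.normalize(features) @ F.normalize(weight).t()
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, weight.size(0)).bool()
    # Add the margin only to the ground-truth class angle.
    cos_m = torch.where(target, torch.cos(theta + margin), cos)
    return scale * cos_m  # pass to F.cross_entropy
```

The margin makes the target class harder to satisfy, encouraging larger angular separation between classes, which matters when incremental sessions add classes with few shots.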


🚀 Quick Start

Train and Evaluate

To get started, simply run:

python cifar.py

You can further customize the dataset, architecture, pre-training method, augmentation strategy, and alignment loss in your experiment commands.
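For example, a customized run might look like the following; the flag names here are hypothetical, so check the argparse options defined in cifar.py for the actual names:

```shell
# Hypothetical flags -- verify against cifar.py's argument parser.
python cifar.py --dataset cifar100 --arch resnet18 --pretrain dino \
    --transform AutoAug --loss curriculum
```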


📈 Results

We present the results on CIFAR-100 below.

CIFAR-100 Results


📝 Notes

We have removed the NTK constraint on the linear layers, as that component is reserved for a future paper; we apologize for the inconvenience. This adjustment does not significantly affect performance and should not hinder further research or development based on this work.


🧾 Citation

If you find this repository useful, please cite our paper:

@article{liu2024ntk,
  title={{NTK}-guided few-shot class incremental learning},
  author={Liu, Jingren and Ji, Zhong and Pang, Yanwei and Yu, Yunlong},
  journal={IEEE Transactions on Image Processing},
  year={2024},
  publisher={IEEE}
}



📬 Contact

For questions and discussions:

