# LE-ProtoPNet

Zhong Ji, Rongshuai Wei, Jingren Liu, Yanwei Pang, Jungong Han

## News
- [2026-03-09] Repository initialized.
- [2026-03-09] README released.
- [2026-03-13] Training and evaluation code will be released.
- [2026-03-13] Pretrained checkpoints will be released.
- [2026-03-13] Reproducibility guide will be released.
## Introduction

This repository provides the official implementation of:

**Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts**
Self-Explainable Models (SEMs) rely on Prototypical Concept Learning (PCL) to make visual recognition more interpretable, but they often struggle in few-shot settings due to limited supervision. To address this issue, we propose a Few-Shot Prototypical Concept Classification (FSPCC) framework that systematically tackles two core challenges in low-data regimes:
- parametric imbalance between the backbone and the concept learning module,
- representation misalignment between feature embeddings and concept activations.
Our method introduces:
- a Mixture of LoRA Experts (MoLE) for parameter-efficient adaptation (see the sketch below),
- cross-module concept guidance to align feature representations with prototypical concept activations,
- a multi-level feature preservation strategy to fuse spatial and semantic cues from multiple layers,
- a geometry-aware concept discrimination loss to reduce concept overlap and improve interpretability.
Experimental results on six popular benchmarks show that our method consistently outperforms existing self-explainable models by a clear margin in few-shot image classification.
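To make the MoLE idea concrete, below is a minimal PyTorch sketch of a frozen linear layer augmented with several LoRA experts and a learned token-wise gate. Everything here (the name `MoLELinear`, the expert count, the rank, the softmax gate) is an illustrative assumption rather than the released implementation, which will live under `models/`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoLELinear(nn.Module):
    """Hypothetical Mixture-of-LoRA-Experts wrapper around a frozen linear layer.

    Each expert is a rank-r LoRA pair (A_e, B_e); a lightweight gate mixes the
    expert updates per token, so only the LoRA and gate parameters are trained.
    This is an illustrative sketch, not the paper's code.
    """

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained backbone stays frozen
        d_in, d_out = base.in_features, base.out_features
        # LoRA factors: B is zero-initialized so training starts at the base model
        self.A = nn.Parameter(torch.randn(num_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_out))
        self.gate = nn.Linear(d_in, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=-1)              # (..., E) mixing weights
        delta = torch.einsum("...d,edr,ero->...eo", x, self.A, self.B)
        delta = (weights.unsqueeze(-1) * delta).sum(dim=-2)    # weighted sum over experts
        return self.base(x) + delta
```

A wrapper of this kind would replace selected linear layers of the backbone so that only the experts and gate are trained, which is one way to address the parametric imbalance noted above.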
## Highlights

- An interpretable few-shot image classification framework built on prototypical concept learning
- A parameter-efficient adaptation design using Mixture of LoRA Experts (MoLE)
- Cross-module concept guidance for better feature-concept alignment
- Multi-level feature preservation for stronger low-data representations
- Geometry-aware concept discrimination for clearer and less-overlapping concepts (see the sketch after this list)
- Strong results on CUB-200-2011, mini-ImageNet, CIFAR-FS, Stanford Cars, FGVC-Aircraft, and DTD
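The paper's exact discrimination loss is not reproduced here. As a hedged illustration, one geometry-aware way to discourage concept overlap is to penalize pairwise cosine similarity between concept prototypes on the unit hypersphere; the function name and margin below are assumptions:

```python
import torch
import torch.nn.functional as F


def concept_discrimination_loss(prototypes: torch.Tensor, margin: float = 0.1) -> torch.Tensor:
    """Illustrative geometry-aware concept discrimination loss (not the paper's exact form).

    prototypes: (C, D) matrix of concept prototype vectors. Pairs of prototypes
    whose cosine similarity exceeds `margin` are penalized, pushing concepts
    apart on the unit hypersphere and reducing overlap.
    """
    p = F.normalize(prototypes, dim=-1)                  # project onto the unit sphere
    sim = p @ p.t()                                      # (C, C) pairwise cosine similarity
    sim = sim - torch.eye(p.size(0), device=p.device)    # zero out self-similarity
    return F.relu(sim - margin).mean()                   # hinge on overly similar pairs
```

Normalizing first makes the penalty purely angular, so concepts are separated by direction rather than magnitude.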
## Repository Structure

The current repository is organized as follows:

```text
LE-ProtoPNet/
├── configs/        # experiment configuration files
├── data/           # dataset storage / preprocessing utilities
├── evaluation/     # evaluation metrics and testing utilities
├── models/         # backbone networks, concept modules, MoLE implementation
├── shared_utils/   # common utilities (logging, helpers, training utils)
├── train_fsl.py    # few-shot training entry point
├── test_fsl.py     # few-shot evaluation script
└── env.yaml        # conda environment specification
```
## Installation

```bash
conda env create -f env.yaml
conda activate LEProtoPNet
```

## Datasets

Our framework supports several commonly used few-shot image classification benchmarks:
- CUB-200-2011
- mini-ImageNet
- CIFAR-FS
- Stanford Cars
- FGVC Aircraft
- DTD (Describable Textures Dataset)
After downloading the datasets, organize them as follows:
```text
data/
├── cub/
│   ├── images
│   └── splits
├── mini_imagenet/
│   ├── images
│   └── splits
├── cifar_fs/
│   └── splits
├── stanford_cars/
├── fgvc_aircraft/
└── dtd/
```
Dataset split files should follow the standard few-shot meta-learning splits used in prior work.
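Exact split formats vary slightly across benchmarks, but evaluation follows the episodic N-way K-shot protocol throughout. As a generic, hedged illustration (the function name `sample_episode` and its signature are assumptions, not this repository's API), one episode can be drawn like this:

```python
import random
from collections import defaultdict


def sample_episode(labels, n_way=5, k_shot=1, q_queries=15, rng=random):
    """Draw one N-way K-shot episode from (sample_index, class_name) pairs.

    Returns (support, query) index lists: k_shot support and q_queries query
    samples for each of n_way randomly chosen classes.
    """
    by_class = defaultdict(list)
    for idx, cls in labels:
        by_class[cls].append(idx)
    classes = rng.sample(sorted(by_class), n_way)        # choose the episode's classes
    support, query = [], []
    for cls in classes:
        chosen = rng.sample(by_class[cls], k_shot + q_queries)
        support += chosen[:k_shot]                       # few labeled examples
        query += chosen[k_shot:]                         # examples to classify
    return support, query
```

Reported accuracies are typically averaged over a large number of such episodes (e.g., in 5-way 1-shot and 5-way 5-shot settings).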
## Training

The training pipeline is implemented in `train_fsl.py`:

```bash
python train_fsl.py
```

## Evaluation

Evaluation is implemented in `test_fsl.py`:

```bash
python test_fsl.py
```

## Citation

If you find this repository useful for your research, please cite:
```bibtex
@article{ji2026interpretable,
  title={Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts},
  author={Ji, Zhong and Wei, Rongshuai and Liu, Jingren and Pang, Yanwei and Han, Jungong},
  journal={IEEE Transactions on Image Processing},
  year={2026},
  publisher={IEEE}
}
```

## Contact

For questions, suggestions, or collaborations:
- Email: jrl0219@tju.edu.cn
- GitHub Issues: Please open an issue in this repository.