MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation

[arXiv](https://arxiv.org/abs/2506.14436)

Introduction

This repository provides the official implementation of Mixture of Orthogonal Rank-one Experts (MoORE), a simple yet effective multi-task adaptation method.

Environment Setup

Please refer to the MoE-PEFT Install Guide.

Models

We utilize meta-llama/Llama-3.1-8B-Instruct as the base model and adapt it with various multi-task adaptation methods.

Training and Evaluation

You can train and evaluate on the CSR-MTL dataset by running the code below.

```bash
bash train.sh
```

To run experiments on the NLU-MTL dataset instead, replace the "task_name" field in moe_peft_moore.json with "glue:cola;glue:mnli;glue:mrpc;glue:qnli;glue:qqp;glue:rte;glue:sst2".
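For reference, the relevant part of moe_peft_moore.json would then look like the sketch below (other fields in the config are omitted and kept unchanged):

```json
{
  "task_name": "glue:cola;glue:mnli;glue:mrpc;glue:qnli;glue:qqp;glue:rte;glue:sst2"
}
```

The semicolon-separated list selects the seven GLUE tasks to train and evaluate on jointly.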

Acknowledgement

This repo is built on MoE-PEFT; we greatly appreciate the authors' contributions.

📌 Citing our work

If you find our work useful, please cite it:

```bibtex
@misc{yuan2025mooresvdbasedmodelmoeization,
      title={MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation},
      author={Shen Yuan and Yin Zheng and Taifeng Wang and Binbin Liu and Hongteng Xu},
      year={2025},
      eprint={2506.14436},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2506.14436},
}
```
