This model is hosted on Hugging Face and can be downloaded directly from the official repository:
🤗 Hugging Face model page: https://huggingface.co/Open4bits/LFM2.5-1.2B-Base-Quantized
All quantized variants (FP16, FP8, INT8, NF4) are available there for easy integration with the Transformers library, whether running locally or in Colab and Kaggle notebooks.