Hi @JackYoung27 🤗
Niels here from the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers, where it was featured: https://huggingface.co/papers/2604.01168.
The paper page lets people discuss your paper and find its artifacts (your models, datasets, or demo, for instance). You can also claim the paper as yours, which will show up on your public profile at HF, and add GitHub and project page URLs.
It's great to see the s0-tuning library already available on GitHub! It's a very clever approach for hybrid architectures. I noticed that the specific tuned states (the ~48 MB files mentioned in the paper) and the curated dataset of 48 verified HumanEval solutions aren't currently linked for download.
It would be awesome to make these checkpoints and the dataset available on the 🤗 Hub to improve their discoverability and reproducibility.
Uploading tuned states
Since S0 states are lightweight (~48 MB), they would be perfect to host on https://huggingface.co/models. You could host them as individual repositories or as "adapters" linked to the base models (Qwen3.5/FalconH1). This would allow the community to use your tuned results out-of-the-box using your library via hf_hub_download.
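Once hosted, loading a tuned state could look something like this (the repo ID and filename below are placeholders, assuming the states are saved as single files in a model repo):

```python
from huggingface_hub import hf_hub_download


def download_s0_state(repo_id: str, filename: str = "s0_state.pt") -> str:
    """Fetch a tuned S0 state file from the Hub and return its local cache path."""
    return hf_hub_download(repo_id=repo_id, filename=filename)


# e.g. path = download_s0_state("your-hf-username/qwen3.5-s0-humaneval")
```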
Uploading dataset
Making the verified solutions available on 🤗 would also be a great resource for the community:
```python
from datasets import load_dataset

dataset = load_dataset("your-hf-username/humaneval-s0-train")
```
See here for a guide: https://huggingface.co/docs/datasets/loading.
Besides that, there's the dataset viewer which allows people to quickly explore the data in the browser.
Let me know if you're interested or need any help regarding this!
Cheers,
Niels
ML Engineer @ HF 🤗