Hi @Manchery 🤗
I'm Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers as yours got featured: https://huggingface.co/papers/2601.19834.
The paper page lets people discuss your paper and find artifacts related to it. I see you've already shared the VisWorld-Eval dataset on the Hub (https://huggingface.co/datasets/thuml/VisWorld-Eval), which is great for discoverability and accessibility!
Would you also like to host the BAGEL-7B-MoT model checkpoints you've post-trained on https://huggingface.co/models? Hosting the weights on Hugging Face will give your work more visibility and enable the community to experiment with the interleaved visual-verbal CoT reasoning you've developed.
If you're interested, here's a guide. If it's a custom PyTorch model, you can use the `PyTorchModelHubMixin` class, which adds `from_pretrained` and `push_to_hub` methods to the model, letting people download and use it right away. Alternatively, you can upload the weights directly through the UI.
Once uploaded, we can also link the models to the paper page (read here) so people can discover them directly.
You can also build a demo for your model on Spaces, and we can provide you a ZeroGPU grant, which gives you access to A100 GPUs for free.
Let me know if you're interested or need any guidance!
Kind regards,
Niels