- Uploading oQ+ Quantization models here for anyone interested:
- @splatstx Really great if we could have a Qwen3.6-27B in oQ3.5(e), as the oQ4s don't fit on 24GB Macs. Just saw them on HF. THANK YOU! 🤗
- I've really caught the AI fever and want to find ways to help, but I can't program like you guys. I have been working on oQ quantizations and posting them on HF. If there is testing, quantization work, or anything else I can help with, please let me know. This is probably the best LLM application I have used so far.