This repository does not contain the model weight files.
The 2-bit MLX-quantized weights for GPT-OSS-120B are hosted on Hugging Face and can be downloaded here:
https://huggingface.co/Open4bits/gpt-oss-120b-mlx-2Bit
Model size: ~61 GB. Please ensure sufficient storage and bandwidth before downloading.
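As a minimal sketch, the weights can be fetched with the Hugging Face CLI (this assumes `huggingface_hub` is installed and that you have enough free disk space; the local directory name is just an example):

```shell
# Install the Hugging Face CLI if needed:
#   pip install -U huggingface_hub

# Download the full repository (~61 GB) into a local directory.
# --local-dir controls where the files land; adjust as needed.
huggingface-cli download Open4bits/gpt-oss-120b-mlx-2Bit \
  --local-dir ./gpt-oss-120b-mlx-2Bit
```

After downloading, MLX-format weights like these are typically loaded with a tool such as `mlx-lm` on Apple silicon (e.g. `mlx_lm.load("./gpt-oss-120b-mlx-2Bit")`), though check that library's documentation for the exact invocation.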