- [HF AWQ](https://huggingface.co/docs/transformers/main/en/quantization/awq)
- [Transformers meets AWQ quantization (AutoAWQ and LLM-AWQ) for lighter and faster quantized inference of LLMs](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)