[vLLM-Omni only supports LoRA adapters in the PEFT format](https://docs.vllm.ai/projects/vllm-omni/en/latest/user_guide/examples/offline_inference/lora_inference/#lora-adapter-format)
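
For reference, a PEFT-format LoRA adapter is a directory produced by `peft`'s `save_pretrained`, containing the adapter config and weights. A typical layout (exact filenames depend on the `peft` version; newer versions write `adapter_model.safetensors`, older ones `adapter_model.bin`) looks like:

```text
my-lora-adapter/
├── adapter_config.json          # LoRA hyperparameters (r, lora_alpha, target_modules, ...)
└── adapter_model.safetensors    # adapter weights (or adapter_model.bin on older peft)
```

Adapters trained or exported in other layouts would need to be converted to this structure before vLLM-Omni can load them.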