Is it necessary to fine-tune all parameters during training? And why does the loss explode when I fine-tune Llama 2 7B with LoRA?
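For context, here is a minimal sketch of the kind of LoRA setup the question describes, assuming the Hugging Face `transformers` and `peft` libraries; the model id, target modules, and hyperparameters are illustrative assumptions, not details from the question. It shows that LoRA freezes the base weights and trains only small low-rank adapter matrices, so full-parameter fine-tuning is not required:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Load the base model. bfloat16 (where the hardware supports it) is
# generally more numerically stable than float16, whose narrow dynamic
# range is a common source of overflow and loss spikes in fine-tuning.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # assumed model id (gated; requires HF access)
    torch_dtype=torch.bfloat16,
)

# LoRA configuration: only the low-rank adapters injected into the
# listed projection layers are trainable; every base weight stays frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,                        # adapter scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
)

model = get_peft_model(model, lora_config)

# Prints the trainable-parameter count: typically well under 1% of the
# 7B total, which answers the first question directly.
model.print_trainable_parameters()
```

With a setup like this, a loss explosion usually traces back to training in float16 or to an overly aggressive learning rate rather than to LoRA itself, so checking the dtype and lowering the learning rate are reasonable first steps.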