
LoRA fine-tuning #7

@lazyeden

Description

Is it necessary to fine-tune all parameters during training? Why does the loss explode when I use LoRA to fine-tune Llama 2 7B?
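For context, a minimal LoRA setup of the kind in question might look like the sketch below. This assumes the Hugging Face PEFT library and the meta-llama/Llama-2-7b-hf checkpoint, neither of which the issue specifies. With LoRA, only the small low-rank adapter matrices are trained while the base weights stay frozen, so full-parameter fine-tuning is not required; exploding loss is often a hyperparameter or precision issue (e.g., a learning rate too high for the adapter scaling, or fp16 overflow) rather than a consequence of freezing the base model.

```python
# A minimal LoRA sketch using Hugging Face PEFT (an assumption; the
# issue does not say which library is in use). Only the adapter
# matrices are trainable; the base model weights stay frozen.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # assumed checkpoint
    torch_dtype=torch.bfloat16,           # bf16 is typically more stable than fp16
)

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor (effective scale = alpha / r)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # typically well under 1% of all parameters
```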
