axolotl-ai-cloud / axolotl

Go ahead and axolotl questions
https://axolotl-ai-cloud.github.io/axolotl/
Apache License 2.0

Llama3 Lora training fails to output and save #1650

Open austinm1120 opened 6 months ago

austinm1120 commented 6 months ago

Please check that this issue hasn't been reported before.

Expected Behavior

Train a Llama 3 based model on the dataset and save the result to the output folder.

Current behaviour

Model completes training, then reports that it is saving, prints a large amount of text, and never actually saves.

Steps to reproduce

accelerate launch -m axolotl.cli.train examples/llama-3/lora-8b.yml

Model loads and trains

Tries to save, but only prints a large amount of text:

[INFO] [axolotl.train.train:173] [PID:1578] [RANK:0] Training Completed!!! Saving pre-trained model to ./outputs/lora-out

[screenshots of the output during saving omitted]

Config yaml

base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: output.jsonl
    type: sharegpt
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>

Possible solution

I have trained with the same dataset on Llama 2 to rule out the dataset as the cause. That run completed fine; I was even able to download the model, convert it, and run it locally. This isolates the problem to training the LoRA on Llama 3.

Which Operating Systems are you using?

Python Version

3.10.14

axolotl branch-commit

Main (Running on Jarvis.ai)

Acknowledgements

nazkhan-8451 commented 2 months ago

Try adding to your config:

lora_modules_to_save:
  - embed_tokens
  - lm_head
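For context, here is how those lines fit into the issue's config. The likely rationale (an assumption, not confirmed in this thread) is that setting a `pad_token` under `special_tokens` modifies the embedding and output layers, and PEFT only persists modified base-model layers alongside the adapter when they are listed in `modules_to_save` (`lora_modules_to_save` in axolotl's config):

```
adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
# Save the (possibly modified) embedding and output layers with the adapter:
lora_modules_to_save:
  - embed_tokens
  - lm_head

special_tokens:
  pad_token: <|end_of_text|>
```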