OpenAccess-AI-Collective / axolotl

Go ahead and axolotl questions
https://openaccess-ai-collective.github.io/axolotl/
Apache License 2.0

Llama-3 DPO is broken in main #1624

Open maziyarpanahi opened 1 month ago

maziyarpanahi commented 1 month ago

Please check that this issue hasn't been reported before.

Expected Behavior

Should be able to fine-tune using DPO with any chat template and dataset type.

Current behaviour

Regardless of dataset type or chat_template, training fails with:

Traceback (most recent call last):
  File "/home/maziyar/anaconda3/envs/axolotl/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/maziyar/anaconda3/envs/axolotl/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/maziyar/apps/LLMs/fine-tuning/axolotl-finetune/axolotl/src/axolotl/cli/train.py", line 70, in <module>
    fire.Fire(do_cli)
  File "/home/maziyar/anaconda3/envs/axolotl/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/maziyar/anaconda3/envs/axolotl/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/maziyar/anaconda3/envs/axolotl/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/maziyar/apps/LLMs/fine-tuning/axolotl-finetune/axolotl/src/axolotl/cli/train.py", line 38, in do_cli
    return do_train(parsed_cfg, parsed_cli_args)
  File "/home/maziyar/apps/LLMs/fine-tuning/axolotl-finetune/axolotl/src/axolotl/cli/train.py", line 59, in do_train
    register_llama3_template()
  File "/home/maziyar/apps/LLMs/fine-tuning/axolotl-finetune/axolotl/src/axolotl/prompt_strategies/sharegpt.py", line 50, in register_llama3_template
    sep_style=SeparatorStyle.LLAMA3,
  File "/home/maziyar/anaconda3/envs/axolotl/lib/python3.10/enum.py", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: LLAMA3. Did you mean: 'LLAMA2'?

Steps to reproduce

1. Pull the latest from main and install it
2. Run any DPO training with any dataset, even with the chatml template/type (see the command sketch below)
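
A minimal command-line sketch of those two steps (the config filename is illustrative, and the launch command follows the README-style invocation):

# install from source, then launch training with any DPO config,
# e.g. the YAML below saved as dpo.yml
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl
pip install -e .
accelerate launch -m axolotl.cli.train dpo.yml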

Config yaml

base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

rl: dpo
chat_template: chatml
datasets:
   - path: Intel/orca_dpo_pairs
     split: train
     type: chatml.intel

dataset_prepared_path:
val_set_size: 0
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:

Possible solution

No response

Which Operating Systems are you using?

Python Version

3.10

axolotl branch-commit

main

Acknowledgements

IlonaBrst commented 1 month ago

I have the same issue. Did you find a fix?

maziyarpanahi commented 1 month ago

I have the same issue. Did you find a fix?

Unfortunately not. I am waiting for @winglian to chime in.

benredmond commented 1 month ago

Looks like fastchat needs to be updated to include this commit adding support for llama3: https://github.com/lm-sys/FastChat/commit/27a05b04a35510afb1d767ae7e5990cbd278f8fe
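
A quick way to check whether the installed fastchat build already includes that commit (a minimal diagnostic sketch, run inside the axolotl environment):

# show which fschat build pip sees, then check whether its SeparatorStyle
# enum exposes the LLAMA3 member that register_llama3_template() needs
pip show fschat
python -c "from fastchat.conversation import SeparatorStyle; print('LLAMA3' in SeparatorStyle.__members__)"

If the second command prints False, the installed fastchat predates llama3 support and will fail exactly as in the traceback above.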

venkatasg commented 1 month ago

I'm getting a similar error for FFT with the latest repo, even after updating fastchat. Hasn't the commit @benredmond raised been merged? Not sure why this error still crops up then.

winglian commented 1 month ago

@maziyarpanahi did you try doing a pip uninstall fschat first and then installing fastchat again?

I'm using the latest docker image for axolotl with this YAML, and training starts; I let it run a few steps without any errors.

https://wandb.ai/oaaic/issue-1624/runs/gijas8p2

maziyarpanahi commented 1 month ago

@maziyarpanahi did you try doing a pip uninstall fschat first and then installing fastchat again?

This is interesting! I usually follow the README to install Axolotl. I will give this a shot and get back to you.

I'm using the latest docker image for axolotl with this YAML, and training starts; I let it run a few steps without any errors.

https://wandb.ai/oaaic/issue-1624/runs/gijas8p2

This config is using chatml; did you change both chatml entries to llama3? (So far I have tried DPO and ORPO with llama3 and got this error.)

UPDATE: I just tried an uninstall/force-reinstall of fastchat with main, but it doesn't work. In fact, even for SFT I get:

AttributeError: LLAMA3. Did you mean: 'LLAMA2'?

maziyarpanahi commented 1 month ago

@winglian I am actually going to test the Docker image; could you please share which image you used? (Is it main-latest?)

IlonaBrst commented 1 month ago

Hey, I'm getting the same issue trying to fine-tune Mistral: AttributeError: LLAMA3. Did you mean: 'LLAMA2'?

with this YAML:

base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false
datasets:

adapter: qlora
lora_model_dir:

sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:

wandb_project: Mistral_Jobmaker
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 10
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 10
eval_steps: 0.05
eval_table_size:
eval_table_max_new_tokens: 128
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: ""
  eos_token: ""
  unk_token: ""

nampdn commented 1 month ago

Here's a workaround while waiting for the fix: git checkout fff06af8d0514d41a62ce7a15ac17353b3b39fce and then pip install -e . again.
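
Spelled out (run from inside your axolotl checkout; this pins the repo to the commit mentioned above):

# roll the working tree back to the known-good commit, then reinstall
git checkout fff06af8d0514d41a62ce7a15ac17353b3b39fce
pip install -e .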

winglian commented 1 month ago

@winglian I am actually going to test the Docker image; could you please share which image you used? (Is it main-latest?)

Yeah, main-latest of this image: https://hub.docker.com/r/winglian/axolotl-cloud/tags
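
For anyone reproducing with that image, a hedged sketch of pulling and entering it (the run flags are illustrative, not the exact invocation used for the linked run):

docker pull winglian/axolotl-cloud:main-latest
docker run --gpus all -it winglian/axolotl-cloud:main-latest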

winglian commented 1 month ago

@maziyarpanahi @IlonaBrst @nampdn it sounds like a broken virtual environment if you're installing fastchat from main and it's still raising an AttributeError for LLAMA3.
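
If the environment is indeed broken, a minimal cleanup sketch along the lines of the suggestion above (assumes axolotl was installed from a local checkout; the extras are illustrative):

# remove any stale fschat copy, then reinstall axolotl so its pinned
# fastchat dependency is pulled in again
pip uninstall -y fschat
pip install -e ".[flash-attn,deepspeed]"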

thepowerfuldeez commented 1 month ago

Hi! It's definitely something with fastchat. I've tried my older environment with the latest axolotl version and it works for me.

maziyarpanahi commented 1 month ago

So, my findings so far (sorry if they're incomplete):

pip install --force-reinstall git+https://github.com/lm-sys/FastChat.git@27a05b04a35510afb1d767ae7e5990cbd278f8fe

I will test this with llama3 template for DPO/SFT just to be sure and will report back.

sklyar61 commented 1 month ago

Hello, colleagues and Wing Lian! I am encountering the same error. I work around it by replacing the file src/axolotl/cli/train.py. I am not using the DPO/KTO/ORPO strategies and so on.

I noticed the error a week ago. Before that, I was using the latest container from Docker Hub and everything was working.

DavidFarago commented 1 month ago

I am encountering the same error, too. @sklyar61 which version of src/axolotl/cli/train.py are you using?

sklyar61 commented 1 month ago

I am encountering the same error, too. @sklyar61 which version of src/axolotl/cli/train.py are you using?

version="0.4.0"

maziyarpanahi commented 1 month ago

Version has been 0.4.0 for a while now. So I would say the commit hash would be more accurate.