axolotl-ai-cloud / axolotl

Go ahead and axolotl questions
https://axolotl-ai-cloud.github.io/axolotl/
Apache License 2.0

Mistral qlora example fails #836

Closed: Nixellion closed this issue 7 months ago

Nixellion commented 1 year ago

Please check that this issue hasn't been reported before.

Expected Behavior

Qlora should train

Current behaviour

I get the following error:

Traceback (most recent call last):
  File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/axolotl/src/axolotl/cli/train.py", line 38, in <module>
    fire.Fire(do_cli)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/workspace/axolotl/src/axolotl/cli/train.py", line 34, in do_cli
    train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
  File "/workspace/axolotl/src/axolotl/train.py", line 124, in train
    trainer.train(resume_from_checkpoint=resume_from_checkpoint)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1591, in train
    return inner_training_loop(
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1984, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2328, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3066, in evaluate
    output = eval_loop(
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3214, in evaluation_loop
    if has_length(dataloader):
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer_utils.py", line 623, in has_length
    return len(dataset) is not None
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 486, in __len__
    return len(self._index_sampler)
ValueError: __len__() should return >= 0
 17%|______________________________________                                                                                                                                                                                          | 1/6 [00:30<02:30, 30.01s/it]
Traceback (most recent call last):
  File "/root/miniconda3/envs/py3.10/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
    args.func(args)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/launch.py", line 986, in launch_command
    simple_launcher(args)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/root/miniconda3/envs/py3.10/bin/python', '-m', 'axolotl.cli.train', 'examples/mistral/qlora.yml']' returned non-zero exit status 1.
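For context on the exception itself: CPython's built-in `len()` raises exactly this `ValueError` whenever an object's `__len__` returns a negative number. A minimal sketch of the mechanism (the `PackedSampler` class and its off-by-one estimate are hypothetical, purely illustrative; they are not axolotl's code):

```python
class PackedSampler:
    """Stand-in for a batch sampler whose estimated length underflows."""

    def __init__(self, num_samples: int, batch_size: int):
        # Hypothetical packing estimate: one batch is subtracted as a
        # safety margin, so a tiny eval split can yield -1 batches.
        self.estimated_batches = num_samples // batch_size - 1

    def __len__(self):
        return self.estimated_batches


sampler = PackedSampler(num_samples=3, batch_size=4)  # 3 // 4 - 1 == -1
try:
    len(sampler)
except ValueError as e:
    print(e)  # __len__() should return >= 0
```

This is why the traceback bottoms out in `DataLoader.__len__` rather than in axolotl's own code: the dataloader merely forwards `len(self._index_sampler)`, and Python rejects the negative value.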

Steps to reproduce

I'm using the axolotl Docker image on RunPod (the same issue also occurs on Windows with Docker Desktop).

To verify that the setup works, I'm trying to run one of the provided examples: examples/mistral/qlora.yml

Out of the box it does not work at all, as discussed in this issue: https://github.com/OpenAccess-AI-Collective/axolotl/issues/835

After downgrading peft and reinstalling pytorch, however, I get a different error:

Traceback (most recent call last):
  File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/axolotl/src/axolotl/cli/train.py", line 38, in <module>
    fire.Fire(do_cli)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/workspace/axolotl/src/axolotl/cli/train.py", line 34, in do_cli
    train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
  File "/workspace/axolotl/src/axolotl/train.py", line 124, in train
    trainer.train(resume_from_checkpoint=resume_from_checkpoint)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1591, in train
    return inner_training_loop(
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1984, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2328, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3066, in evaluate
    output = eval_loop(
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3214, in evaluation_loop
    if has_length(dataloader):
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer_utils.py", line 623, in has_length
    return len(dataset) is not None
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 486, in __len__
    return len(self._index_sampler)
ValueError: __len__() should return >= 0
 17%|______________________________________                                                                                                                                                                                          | 1/6 [00:30<02:30, 30.01s/it]
Traceback (most recent call last):
  File "/root/miniconda3/envs/py3.10/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
    args.func(args)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/launch.py", line 986, in launch_command
    simple_launcher(args)
  File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/accelerate/commands/launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/root/miniconda3/envs/py3.10/bin/python', '-m', 'axolotl.cli.train', 'examples/mistral/qlora.yml']' returned non-zero exit status 1.

Config yaml

examples/mistral/qlora.yml

Possible solution

No response

Which Operating Systems are you using?

Python Version

3.10

axolotl branch-commit

runpod-main-latest

Acknowledgements

fpreiss commented 1 year ago

Can confirm, this bug was introduced in commit 641e6f7e. Things seem to work fine on 6dc68a6.

Nixellion commented 1 year ago

I'm failing to checkout an older commit in runpod docker container:

git fetch
git checkout 6dc68a6
error: pathspec '6dc68a6' did not match any file(s) known to git

Any idea why this is?

EDIT: Had to do git fetch --unshallow to fix it.
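The pathspec error above is the classic symptom of a shallow clone: a `--depth 1` clone only has the tip commit, so older SHAs are unknown to git until the full history is fetched. A self-contained sketch reproducing this with a throwaway local repository (all paths are illustrative):

```shell
# Reproduce the shallow-clone symptom locally, then fix it with
# `git fetch --unshallow` (as in the EDIT above).
set -e
tmp=$(mktemp -d)

# Build a tiny origin repo with two commits.
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=a@b -c user.name=a commit -q --allow-empty -m first
git -C "$tmp/origin" -c user.email=a@b -c user.name=a commit -q --allow-empty -m second

# A depth-1 clone knows only the newest commit.
git clone -q --depth 1 "file://$tmp/origin" "$tmp/shallow"
cd "$tmp/shallow"
old=$(git -C "$tmp/origin" rev-parse HEAD~1)

# Checking out the older SHA fails, just like in the container...
git checkout "$old" 2>/dev/null || echo "checkout failed: shallow clone"

# ...until the missing history is fetched.
git fetch -q --unshallow
git checkout -q "$old"
echo "now on $old"
```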

winglian commented 1 year ago

The val_set_size is too small, I believe. Try increasing it from 0.01 to 0.05.
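For reference, the suggested change would look like this in the example config (`val_set_size` is the key named in the comment above; the comments are illustrative):

```yaml
# examples/mistral/qlora.yml (excerpt)
# Raise the eval split so the packed validation dataloader
# gets at least one full batch.
val_set_size: 0.05  # was 0.01
```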

Nixellion commented 1 year ago

I'll try. Though, as I said, this is the provided example qlora.yml config; it should probably work out of the box.

Nixellion commented 1 year ago

Meanwhile, rolling back to the older commit 6dc68a6 worked. A single-line fix:

pip install peft==0.6.0 && conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia && git fetch --all && git fetch --unshallow && git checkout 6dc68a6 && echo "PATCHED"

NanoCode012 commented 7 months ago

Closing this as stale. The likely cause is a validation dataset that is too small combined with sample_packing; adjusting either should fix it.

Please let us know if this re-occurs.
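To make the "validation dataset too small + sample_packing" interaction concrete, here is a back-of-envelope sketch with assumed numbers (the function and every figure in it are illustrative, not axolotl's actual packing code): packing merges many short examples into each sequence, so a 1% eval split can pack down to zero full batches, and an off-by-one estimate on top of that goes negative.

```python
def packed_eval_batches(dataset_size, val_set_size, packing_factor, batch_size):
    """Rough estimate of eval batches after sample packing.

    packing_factor: assumed number of short examples merged per packed sequence.
    """
    eval_examples = int(dataset_size * val_set_size)
    packed_sequences = eval_examples // packing_factor
    return packed_sequences // batch_size


# 2000-example dataset, ~10 short examples per packed sequence, batch size 4:
print(packed_eval_batches(2000, 0.01, 10, 4))  # 0 batches -> starved eval loop
print(packed_eval_batches(2000, 0.05, 10, 4))  # 2 batches -> eval can run
```

Either lever works: raising val_set_size grows the numerator, while disabling sample_packing for eval removes the `packing_factor` shrinkage entirely.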