VelocityRa opened this issue 6 months ago
Submitted a PR: https://github.com/OpenAccess-AI-Collective/axolotl/pull/1716
I get the same error running the CodeLlama models. In each case I run `accelerate launch -m axolotl.cli.train` with one of:
examples/code-llama/7b/lora.yml
examples/code-llama/7b/qlora.yml
examples/code-llama/13b/lora.yml
examples/code-llama/13b/qlora.yml
examples/code-llama/34b/lora.yml
examples/code-llama/34b/qlora.yml
Please check that this issue hasn't been reported before.
Expected Behavior
The command (see Steps to reproduce below) should complete successfully, outputting the preprocessed dataset.
Current behaviour
It errors out.
Steps to reproduce
CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/llama-3/lora-8b.yml
Config yaml
No response
Possible solution
Set `sample_packing` to `false` in the example config? But it's explicitly set to `true` there, so I'm not sure if something else is wrong.

Similar issue: https://github.com/OpenAccess-AI-Collective/axolotl/issues/999
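For reference, a minimal sketch of that workaround (this assumes the rest of `examples/llama-3/lora-8b.yml` stays as shipped; it is only the change suggested above, not a confirmed fix):

```yaml
# Excerpt only, not the full example config.
# The shipped example explicitly sets this to true; flipping it to false
# is the workaround discussed above.
sample_packing: false
```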
Edit: Issue happens for `mistral/lora.yml` too.

Which Operating Systems are you using?
Python Version
3.10
axolotl branch-commit
main/22ae21a
Acknowledgements