unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

fix/autograd_compile #1256

Closed · Erland366 closed this 2 weeks ago

Erland366 commented 2 weeks ago

By disabling it explicitly, the error from #1250 is now gone.

The weird thing is that compiled_autograd is already disabled under torch._dynamo.config .-.
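For reference, a minimal sketch of the kind of explicit disable described here; where exactly unsloth applies it in its codebase is an assumption:

```python
import torch._dynamo.config

# Explicitly disable compiled autograd. The comment above notes that this flag
# already reads as disabled under torch._dynamo.config, yet setting it
# explicitly is what made the #1250 error go away.
torch._dynamo.config.compiled_autograd = False
```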

Erland366 commented 2 weeks ago

cc @danielhanchen

Erland366 commented 2 weeks ago

Whoops, it still fails in Colab, so I'll close this first.