unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Getting "compiled_autograd.enable() requires no threads in backwards()" on running SFTTrainer on unsloth/gemma-2 models #1257

Closed sudha-kannan closed 2 weeks ago

sudha-kannan commented 2 weeks ago

When running Unsloth's gemma-2-2b example notebook, the error cited below occurs every time `trainer_stats = trainer.train()` is executed.

```
RuntimeError                              Traceback (most recent call last)
 in <cell line: 2>()
      1 torch._dynamo.config.compiled_autograd = False
----> 2 trainer_stats = trainer.train()

36 frames
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/compiled_autograd.py in enable(compiler_fn)
    497 @contextlib.contextmanager
    498 def enable(compiler_fn):
--> 499     prior = torch._C._dynamo.compiled_autograd.set_autograd_compiler(
    500         functools.partial(AutogradCompilerInstance, compiler_fn)
    501     )

RuntimeError: compiled_autograd.enable() requires no threads in backwards()
```
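For reference, a minimal sketch of the setup that hits this, following the gemma-2-2b notebook's flow; the dataset and hyperparameters below are stand-ins, not the notebook's exact values:

```python
# Minimal sketch of the notebook flow that triggers the error.
# The dataset and hyperparameters are assumptions, not the exact
# notebook values.
from unsloth import FastLanguageModel  # import unsloth before trl/transformers

import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the 4-bit Gemma-2 model as the notebook does.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (r=16 and the module list are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any dataset with a "text" column stands in for the notebook's
# formatted training data.
dataset = load_dataset("imdb", split="train[:1%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=10,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)

# Disabling compiled autograd beforehand (as in the traceback above)
# does not avoid the error:
torch._dynamo.config.compiled_autograd = False
trainer_stats = trainer.train()  # raises the RuntimeError above
```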

I only started facing this error recently. Were there any recent changes to the libraries that could cause it? Any suggestions would be appreciated.