Closed sudha-kannan closed 2 weeks ago
While running the gemma-2-2b example notebook from Unsloth, specifically after executing "trainer_stats = trainer.train()", the error cited below keeps occurring.
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <cell line: 2>()
      1 torch._dynamo.config.compiled_autograd = False
----> 2 trainer_stats = trainer.train()

36 frames
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/compiled_autograd.py in enable(compiler_fn)
    497 @contextlib.contextmanager
    498 def enable(compiler_fn):
--> 499     prior = torch._C._dynamo.compiled_autograd.set_autograd_compiler(
    500         functools.partial(AutogradCompilerInstance, compiler_fn)
    501     )

RuntimeError: compiled_autograd.enable() requires no threads in backwards()
I have only started facing this error recently. Were any changes made to the libraries that could cause it? Any suggestions would be appreciated.
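Since the question is whether a recent library release caused this, it helps to pin down the exact versions in the environment. A minimal sketch for collecting them (the package names listed are the ones the Unsloth notebooks typically install; adjust for your environment):

```python
# Print installed versions of the relevant packages so the report
# can be matched against recent releases. Packages that are absent
# are flagged rather than raising an error.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "unsloth", "transformers", "trl"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```

Including this output in the issue makes it straightforward to check whether the failure coincides with a new torch or unsloth release.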