```
Do you wish to optimize your script with torch dynamo?[yes/NO]:yes
-----------------------------------------------------------------------
Which dynamo backend would you like to use?
Please select a choice using the arrow or number keys, and selecting with enter
  eager
  aot_eager
➔ inductor
  nvfuser
  aot_nvfuser
  aot_cudagraphs
  ofi
  fx2trt
  onnxrt
  ipex
```
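For reference, this choice is persisted to the Accelerate config file (typically `~/.cache/huggingface/accelerate/default_config.yaml`), so it can also be inspected or reverted there. In the Accelerate version that shows this backend list it is stored roughly as follows; the exact field name may vary by version, so verify against your generated file:

```yaml
# Excerpt from the generated default_config.yaml (assumed layout for this
# Accelerate version; newer versions nest the setting under `dynamo_config`).
dynamo_backend: INDUCTOR
```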
When selecting this in `accelerate config`, the LoRA training script errors out, while the same arguments work with TorchDynamo disabled.

Maybe `torch.compile()` needs to be added conditionally and manually, instead of automatically with Accelerate?
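A minimal sketch of that manual, conditional approach, assuming the dynamo question in `accelerate config` is answered NO: the `--compile` flag is hypothetical (not an existing Accelerate option), and the `torch.nn.Linear` model is a stand-in for wherever the real script builds its LoRA model.

```python
import argparse

import torch
from accelerate import Accelerator

# Hypothetical flag: compilation is opted into explicitly instead of via
# the accelerate config.
parser = argparse.ArgumentParser()
parser.add_argument("--compile", action="store_true")
args = parser.parse_args()

accelerator = Accelerator()  # dynamo left disabled in the accelerate config

model = torch.nn.Linear(16, 16)  # stand-in for the real LoRA model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Compile only when requested and when torch.compile exists (PyTorch >= 2.0),
# so the same script still runs with dynamo fully disabled.
if args.compile and hasattr(torch, "compile"):
    model = torch.compile(model, backend="inductor")

model, optimizer = accelerator.prepare(model, optimizer)
```

This would also keep the failure isolated: with the flag off, the launcher never touches dynamo, and with it on, `torch.compile()` can be debugged independently of Accelerate.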