- [ ] One of the scripts in the `examples/` folder of Accelerate, or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the `llama.py` example in the distributed inference folder. It fails with the following error:
```
torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow.
```

For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands
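For reference, the error message points at `functorch.experimental.control_flow.cond`, which lets dynamo capture a data-dependent branch explicitly. A minimal sketch of the pattern (the predicate and branch functions below are illustrative, not taken from `llama.py`):

```python
import torch
from functorch.experimental.control_flow import cond


def true_fn(x):
    return x.sin()


def false_fn(x):
    return x.cos()


def f(x):
    # Instead of `if x.sum() > 0: ...`, which dynamo cannot trace,
    # express the data-dependent branch with cond so both branches
    # are captured in the graph.
    return cond(x.sum() > 0, true_fn, false_fn, (x,))
```

Whether this helps here depends on where the dynamic control flow occurs in the model code being compiled.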
### Expected behavior
I have tried different `accelerate launch` flags such as `--dynamo_use_dynamic`, but I am not sure how to fix the error above.