System Info

Information

Tasks

[ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)

Reproduction

https://github.com/huggingface/accelerate/blob/3fcc9461c4fcb7228df5e5246809ba09cfbb232e/src/accelerate/hooks.py#L439

Should we also pass `preload_module_classes` on to the recursive call at the line linked above? See the sketch below.
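The following is a minimal sketch of the change the question seems to point at, assuming the linked line is the recursive call inside `attach_execution_device_hook` (an assumption on my part; this is a reconstruction for illustration, not the verbatim `hooks.py` source). The issue is that if the recursion drops `preload_module_classes` (and `skip_keys`), they reset to `None` below the first level, so the "break the recursion at a preload module" check can never fire for deeper descendants:

```python
from typing import List, Optional, Union

import torch


def attach_execution_device_hook(
    module: torch.nn.Module,
    execution_device: Union[int, str, torch.device],
    skip_keys: Optional[Union[str, List[str]]] = None,
    preload_module_classes: Optional[List[str]] = None,
):
    # ... hook attachment elided ...

    # Break the recursion if we reach a module that preloads its submodules.
    if preload_module_classes is not None and module.__class__.__name__ in preload_module_classes:
        return

    for child in module.children():
        # Suggested fix: forward skip_keys and preload_module_classes instead
        # of implicitly resetting them to None on every recursive call.
        attach_execution_device_hook(
            child,
            execution_device,
            skip_keys=skip_keys,
            preload_module_classes=preload_module_classes,
        )
```

Without the forwarding, `preload_module_classes` only affects the top-level module itself, which looks unintended.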
Stepping into the code at the linked line is enough to reproduce.
Expected behavior
Identify the bug: presumably the recursive call should forward `preload_module_classes` so that it takes effect below the first level of children.