Closed — Andyzzz closed this issue 9 months ago
You can try wrapping train.py with accelerate to avoid torchrun while still getting faster training and lower memory usage.
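For example (a sketch assuming the Hugging Face `accelerate` package is installed and `train.py` is the training script in question; the right flags depend on your hardware):

```shell
# One-time interactive setup: pick multi-GPU, mixed precision, etc.
accelerate config

# Launch training across the configured devices -- no torchrun needed
accelerate launch train.py

# Or skip the config step and pass options on the command line,
# e.g. two processes with fp16 mixed precision (this is where the
# memory savings typically come from)
accelerate launch --num_processes 2 --mixed_precision fp16 train.py
```

Note that for the launcher to do anything useful, train.py itself generally needs to use the `Accelerator` API (e.g. wrapping the model, optimizer, and dataloader with `accelerator.prepare(...)`); if the script is plain PyTorch, that change is needed first.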