Open kingofrubbish2 opened 4 months ago
accelerate launch --config_file config.yaml yourtraining.py
config.yaml:
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: false
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
Hello, first of all thank you for sharing. I have a question: how can I use your code to do single-machine, multi-GPU distributed training? Currently, when I run your code, only one GPU is doing any computation.