lucidrains / meshgpt-pytorch

Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch

Distributed training #70

Open kingofrubbish2 opened 4 months ago

kingofrubbish2 commented 4 months ago

Hello, first of all thank you for sharing. I have a question: how can I use your code for single-machine multi-GPU distributed training? Currently, when I run your code, only one GPU is doing the computation.

Anthoney commented 2 months ago

accelerate launch --config_file config.yaml yourtraining.py"

config.yaml:

compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: false
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
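For reference, here is a minimal sketch of a training script that runs correctly under `accelerate launch` with the config above. The model, dataset, and hyperparameters are placeholders rather than the actual MeshGPT setup; the point is only to show how Hugging Face Accelerate handles the multi-GPU plumbing once the script is launched this way.

```python
# Minimal sketch of a script compatible with `accelerate launch`.
# Model/data below are placeholders -- substitute your meshgpt-pytorch training loop.

import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up the distributed settings from config.yaml at launch

# placeholder model, optimizer, and data
model = torch.nn.Linear(128, 128)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
dataset = TensorDataset(torch.randn(1024, 128))
dataloader = DataLoader(dataset, batch_size=16, shuffle=True)

# accelerate wraps model, optimizer, and dataloader for DDP across all GPUs
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for epoch in range(10):
    for (batch,) in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(batch), batch)
        accelerator.backward(loss)  # replaces loss.backward() for mixed/distributed setups
        optimizer.step()

    if accelerator.is_main_process:
        print(f"epoch {epoch} done")
```

Note that `num_processes: 4` in the config should match the number of GPUs you want to use. Also, if your script already uses the trainer classes shipped with meshgpt-pytorch (which appear to be built on Accelerate internally), you likely do not need to modify the script at all; launching it with `accelerate launch` and the config above should be sufficient.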