A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
Unable to train the model with 2 GPU instances (multi-node), each with 4 A100s #47
The paper says the model was trained with 8 A100 GPUs. I have two instances, each equipped with 4 A100s, rather than a single instance with 8 A100s. Is there any way to specify the instances in the configuration?
In other words, where can I specify the number of nodes in the code? For reference:
- https://lightning.ai/docs/pytorch/stable/common/trainer.html#num-nodes
- https://pytorch.org/docs/stable/elastic/run.html
- https://lambdalabs.com/blog/multi-node-pytorch-distributed-training-guide
I would appreciate any comments on these.
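For context, this is the standard multi-node pattern from the torchrun docs linked above; a minimal sketch, assuming the job is launched with torchrun on both nodes (the entry point name `train.py` is a placeholder, not an actual ONE-PEACE script):

```python
# Sketch of a 2-node x 4-GPU setup under torchrun, which exports
# RANK / LOCAL_RANK / WORLD_SIZE to each spawned process.
#
# Run on every node (node_rank 0 on the first node, 1 on the second):
#   torchrun --nnodes=2 --nproc_per_node=4 --node_rank=<0 or 1> \
#            --master_addr=<node-0 IP> --master_port=29500 train.py
import os

import torch
import torch.distributed as dist


def setup_distributed() -> int:
    # torchrun sets the env vars, so the env:// rendezvous needs no extra args.
    dist.init_process_group(backend="nccl", init_method="env://")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank


if __name__ == "__main__":
    local_rank = setup_distributed()
    # World size here is 2 nodes x 4 GPUs = 8 processes total.
    print(f"rank {dist.get_rank()} / {dist.get_world_size()} on cuda:{local_rank}")
    dist.destroy_process_group()
```

With PyTorch Lightning (the first link), the equivalent is configured on the Trainer, e.g. `Trainer(num_nodes=2, devices=4, accelerator="gpu", strategy="ddp")`, again launched once per node.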