imankgoyal / NonDeepNetworks

Official Code for "Non-deep Networks"
BSD 3-Clause "New" or "Revised" License

How model parallelize across GPUs? #10

Open Gumpest opened 2 years ago

Gumpest commented 2 years ago

Could you share more details on parallelizing across GPUs, e.g., how to implement it in PyTorch?

RahulBhalley commented 2 years ago

You can use PyTorch Lightning instead. It automatically parallelizes model training across GPUs and also supports TPUs with just a single argument.
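For concreteness, a minimal sketch of the Lightning suggestion above. The `Trainer` arguments shown are real Lightning API; the commented-out `ParNetModule` and `train_loader` are hypothetical placeholders, and the block only constructs a trainer when Lightning and at least two GPUs are actually available:

```python
import importlib.util

configured = False
# Only attempt this when PyTorch Lightning is installed and >= 2 GPUs exist;
# otherwise the sketch is a no-op.
if importlib.util.find_spec("pytorch_lightning") is not None:
    import pytorch_lightning as pl
    import torch

    if torch.cuda.device_count() >= 2:
        # accelerator/devices/strategy are the knobs: Lightning spawns one
        # process per GPU and wraps the model in DDP for you.
        trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp")
        # trainer.fit(ParNetModule(), train_dataloaders=train_loader)
        configured = True
```

Note this gives you data-parallel training; splitting ParNet's streams across devices for inference is a separate question.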

imankgoyal commented 2 years ago

We use the NCCL backend with PyTorch to parallelize the streams during inference (testing). For training, we use the usual distributed setup.
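The repo's exact NCCL code isn't shown in this thread; as a GPU-free structural sketch of the fan-out/fan-in pattern being described (the three `stream` functions and the `fuse` step are stand-ins, not ParNet's real layers), running the streams concurrently and then fusing their outputs looks roughly like:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in "streams": in ParNet each would be a branch of the network
# pinned to its own GPU; here they are plain functions for illustration.
def stream1(x): return x + 1
def stream2(x): return x * 2
def stream3(x): return x - 3

def fuse(outs):
    # Stand-in for the fusion step that combines the branch outputs.
    return sum(outs)

def parallel_inference(x):
    # Fan out: each stream runs concurrently (on real hardware, one per GPU,
    # synchronized via the NCCL backend); fan in: fuse the results.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(s, x) for s in (stream1, stream2, stream3)]
        return fuse(f.result() for f in futures)

print(parallel_inference(10))  # (10 + 1) + (10 * 2) + (10 - 3) = 38
```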

Gumpest commented 2 years ago

So we need at least three GPUs to run inference on the three streams in ParNet?

imankgoyal commented 2 years ago

Yes, if you want multi-GPU inference. Otherwise, you can do single-GPU inference, but it will be slower.
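A back-of-the-envelope illustration of why single-GPU inference is slower. The millisecond numbers below are made up; the point is that sequential execution pays the sum of the branch latencies, while multi-GPU execution pays roughly the slowest branch plus some communication overhead:

```python
# Hypothetical per-stream latencies in ms (illustrative, not measured).
stream_ms = [5.0, 7.0, 6.0]
comm_overhead_ms = 1.0  # assumed cost of syncing results across GPUs

single_gpu_ms = sum(stream_ms)                    # streams run one by one
multi_gpu_ms = max(stream_ms) + comm_overhead_ms  # streams run concurrently

print(single_gpu_ms, multi_gpu_ms)  # 18.0 8.0
```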

twmht commented 1 year ago

@imankgoyal

For edge devices, using multiple GPUs for inference is expensive. What is your opinion?