ashawkey / torch-ngp

A pytorch CUDA extension implementation of instant-ngp (sdf and nerf), with a GUI.
MIT License

Can it run with torch.distributed? For example, I want to train on 8 x V100s with torch.distributed. How can I do this, or what code would I need to add? #118

Open wacyfdyy opened 2 years ago

ashawkey commented 2 years ago

@wacyfdyy Sorry that this is not implemented. Maybe you would like to check ngp_pl, which supports parallel training.

wacyfdyy commented 2 years ago

> @wacyfdyy Sorry that this is not implemented. Maybe you would like to check ngp_pl, which supports parallel training.

Thank you for your answer. Now I am trying to add DDP code myself.

Kartik-Teotia commented 2 years ago

Can the non-CUDA ray marching mode support multi-GPU training, @ashawkey?