Open tfiliano opened 8 months ago
As far as I know, they would have to rewrite the CUDA code for that. The training part isn't bad for now; the only reason to rewrite the code would be very big datasets, but Gaussians can be trained from just a few pics... so I don't know if the authors will do that. ;)
Could Dask possibly be integrated to distribute the job?
Yeah, I have spotted that one. I can try to implement it, but Python, CUDA, etc. are not my world.
> Could Dask possibly be integrated to distribute the job?
I was looking at this: https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html. Since the project already uses PyTorch, I was thinking this could be one possibility.
But yeah, I will need some time to read the source code and understand it, to see how and where nn.parallel could be applied.
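For reference, the basic API shape of `DistributedDataParallel` looks roughly like this. This is only a minimal sketch, not the project's actual training loop: it uses a single-process "world" on the CPU `gloo` backend and a stand-in `nn.Linear` model, just to show where the wrapping happens. A real multi-GPU run would launch one process per GPU (e.g. via `torchrun`) and pass `device_ids=[local_rank]`.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train_step():
    # Single-process world_size=1 group, purely for illustration.
    dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                            rank=0, world_size=1)

    model = nn.Linear(10, 1)    # stand-in for the actual splatting model
    ddp_model = DDP(model)      # wraps the model; gradients are synchronized across ranks

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    x = torch.randn(4, 10)      # stand-in for a real training batch
    loss = ddp_model(x).sum()
    loss.backward()             # the gradient all-reduce happens during backward()
    opt.step()

    dist.destroy_process_group()
    return loss.item()

loss_value = train_step()
```

The main caveat for this project is that DDP parallelizes over data batches, while Gaussian splatting trains on one scene; that is presumably why Grendel-GS (linked below) had to do more invasive changes than just wrapping the model.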
This project enables distributed Gaussian splatting training over multiple GPUs: https://github.com/nyu-systems/Grendel-GS
Does anyone know how to choose the GPU? The default is to run on GPU 0, but I want it to run on GPU 1.
Is there any way to do it?
Could someone point me to what I need to do?
Thanks
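The usual way to do this with any CUDA program is to mask the visible devices with the `CUDA_VISIBLE_DEVICES` environment variable, so that physical GPU 1 becomes the process's device 0 and the unmodified code "just works". A sketch (the CPU fallback is only there so the snippet also runs on machines without CUDA):

```python
import os

# Hide every GPU except physical GPU 1 from this process.
# This must be set before CUDA is initialized (ideally before importing torch),
# which is why setting it in the shell is the most reliable option.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

def pick_device():
    # After the mask above, physical GPU 1 shows up as "cuda:0".
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda:0"
    except ImportError:
        pass
    return "cpu"  # fallback so the sketch runs anywhere

device = pick_device()
```

Equivalently, from the shell: `CUDA_VISIBLE_DEVICES=1 python train.py ...`. Alternatively, if the training script hard-codes device 0, you could call `torch.cuda.set_device(1)` early on instead, but the environment-variable approach avoids touching the code at all.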