alievk / npbg

Neural Point-Based Graphics

GPU required for training? #3

Closed. flaime-ai closed this issue 4 years ago.

flaime-ai commented 4 years ago

I am trying to train a new scene and am running into memory issues with training.

I have tried with a single Titan RTX (24 GB) card and a multi-GPU setup (4 x Tesla T4, 16 GB each). With both I am receiving a CUDA out-of-memory error on the first epoch of training.

I was wondering which GPU setups you used for training, as I would assume these should be large enough to train the model.

seva100 commented 4 years ago

We used a GeForce GTX 1080 Ti with less than 12 GB of GPU memory, which should be more than enough for training in most cases. I'm not sure where your issue comes from; perhaps you can try decreasing the batch size if it's more than 1 in your experiment (the --batch_size parameter of train.py). Does it happen when running the example from the readme?
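
For example, a run with a reduced batch size might look like the sketch below; the config path is a placeholder, and only the --batch_size flag is taken from this thread:

```bash
# Placeholder config path; only --batch_size is confirmed in this thread.
python train.py --config <path/to/your_scene_config.yaml> --batch_size 1
```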

flaime-ai commented 4 years ago

Thanks. Okay, there must be something else going on. I can get both of the examples running with no problems, so I'll try decreasing the batch size. It may also have to do with the point cloud generated from Agisoft; I didn't optimise that.
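
As a rough sketch (not something tried in this thread), one way to check whether the Agisoft cloud is unusually dense is to load it and, if necessary, voxel-downsample it before training. The file names and voxel size below are hypothetical:

```python
# Hypothetical check: inspect and optionally thin a dense point cloud
# before training. File paths and the voxel size are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene_from_agisoft.ply")
print(f"Point count: {len(pcd.points)}")

# A very dense cloud means more per-point descriptors to store and
# rasterize, so thinning it can lower GPU memory usage during training.
downsampled = pcd.voxel_down_sample(voxel_size=0.005)
print(f"Downsampled point count: {len(downsampled.points)}")
o3d.io.write_point_cloud("scene_downsampled.ply", downsampled)
```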