princeton-computational-imaging / NLOSFeatureEmbeddings


Batch_size>1 breaks lct layers #3

Open grau4 opened 3 years ago

grau4 commented 3 years ago

Hi there!

I've tried to train/evaluate the model defined in deepVoxel.py on my own data, but it fails assert statements or concat calls inside the lct/fk/phasor layers whenever the batch size is larger than 1. The same happens when running the standalone examples of these modules provided in "utils_pytorch/". Is there a way to reshape the inputs so that larger batches can be processed, or is the implementation inherently limited to processing inputs one by one?
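For reference, the only workaround I have so far is splitting the batch and running the layers one sample at a time, roughly like the sketch below (`run_in_chunks` is just a hypothetical helper, assuming the layer only behaves correctly for batch size 1; the input shape is only an assumption):

```python
import torch

def run_in_chunks(layer, x, chunk_size=1):
    # Hypothetical wrapper: split the batch along dim 0, run the lct/fk/phasor
    # layer on each chunk (one sample at a time by default), and concatenate
    # the outputs again. Assumes the layer works correctly for batch size 1.
    outs = [layer(chunk) for chunk in torch.split(x, chunk_size, dim=0)]
    return torch.cat(outs, dim=0)

# e.g. out = run_in_chunks(lct_layer, transients)  # transients assumed (B, C, T, H, W)
```

This obviously gives up the parallelism of a real batched forward pass, so I'd prefer a proper fix if one exists.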

Thank you! Best

wenzhengchen commented 3 years ago

Hi,

Actually, we did notice that the LCT layer takes a lot of memory. In our experiments we used an NVIDIA 24 GB GPU and set batch_size to 3 or 4 (I can't remember exactly). If you use a GPU with less memory, you may have to restrict the batch size to 1.

To increase it: 1) switch to a GPU with more memory (the easiest way.... LOL), or 2) don't use a feature resolution of 128. Use a feature resolution of 64 or 32 instead, but it will lose details.
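As a rough illustration of why the resolution matters, here is a back-of-the-envelope sketch (not code from the repo; it assumes complex64 voxels zero-padded to twice the resolution along each axis for the FFT-based LCT convolution, and the channel count is only an example):

```python
# Estimate memory of one padded feature volume: channels * (2*res)^3 voxels,
# 8 bytes each for complex64. Channel count and padding factor are assumptions.
def volume_gb(res, channels=1, bytes_per_voxel=8):
    padded = 2 * res
    return channels * padded ** 3 * bytes_per_voxel / 1024 ** 3

for res in (32, 64, 128):
    print(f"res {res:3d}: ~{volume_gb(res):.3f} GB per channel per sample")

print(f"res 128, 32 channels: ~{volume_gb(128, channels=32):.1f} GB per sample")
```

Multiplied by the feature channels, the batch size, and the FFT workspaces, the 128 setting adds up quickly, which is why dropping to 64 or 32 helps.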

syjjsy commented 2 years ago


Hi, I am also trying to train this model myself, but I seem to have run into some problems. Could you share some of your code? Thank you