Closed: sgiraudot closed this issue 6 years ago
Hi,
For the Stanford dataset, every room is processed in a single network pass which means the batch size needs to be sufficient to fit the full room. During training if the scan does not fit in the batch it is simply discarded (i.e. not used for training). For evaluation this is not possible so your batch size really needs to be large enough. What GPU are you using? 200000 points is actually not that much.
The GPU is an Nvidia GeForce GT 750M.
Does this mean that, if I want to test the program on my own data sets, I need to make sure that each input file I'm providing is smaller than the batch size?
> Does this mean that, if I want to test the program on my own data sets, I need to make sure that each input file I'm providing is smaller than the batch size?
Not necessarily. This is the default regime for the indoor datasets (Stanford and ScanNet), but you can also sample random chunks from your scans during training, which is what is done for the outdoor dataset (Semantic3D). At test time, the scans are labeled using a sliding window. You can check out the Semantic3D experiments.
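As a rough illustration of that alternative regime, the sketch below samples a random chunk for training and sweeps a sliding window for test-time labeling; the window size, stride and (x, y) cropping are assumed values, not the repository's exact implementation.

```python
import numpy as np

def random_chunk(points, chunk_size=8.0):
    """Crop a random chunk_size x chunk_size column (in x, y) from one scan."""
    center = points[np.random.randint(len(points)), :2]
    mask = np.all(np.abs(points[:, :2] - center) <= chunk_size / 2, axis=1)
    return points[mask]

def sliding_window(points, window=8.0, stride=4.0):
    """Yield overlapping windows covering the whole scan for test-time labeling."""
    mins, maxs = points[:, :2].min(0), points[:, :2].max(0)
    for x in np.arange(mins[0], maxs[0] + stride, stride):
        for y in np.arange(mins[1], maxs[1] + stride, stride):
            mask = (np.abs(points[:, 0] - x) <= window / 2) & \
                   (np.abs(points[:, 1] - y) <= window / 2)
            if mask.any():
                # Also yield the indices so per-window predictions can be
                # merged back onto the full scan.
                yield points[mask], np.flatnonzero(mask)
```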
Okay, thank you for that clarification; I now have a better understanding of how it works.
I think you can probably close the issue, as my main concern was being limited on other datasets. Thanks again!
Great. Let me know if other concerns pop up.
Hello,
I've been trying to test your program using the Stanford dataset. During the training phase I encountered a memory issue (not enough memory), so I tried to decrease the value of tt_batch_size (from 200000 to 5000, which might be overly small, but it was for the sake of the experiment). Training went well and finished without error, but when I launch the test program I get the following error:

It seems that the batch size is too small. Is there a limit to this batch size? How can I know it? (One way to check is sketched below.)
Thanks in advance
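Regarding "how can I know it": since the answer above says the batch size must cover the largest room fed at test time, one practical check is to measure the biggest per-room point count in the data and set tt_batch_size at least that high. The sketch below assumes the preprocessed rooms are stored as .npy point arrays; the file pattern is hypothetical.

```python
import glob
import numpy as np

def max_points(pattern="Area_*/**/*.npy"):
    """Return the largest per-room point count, i.e. the minimum safe batch size."""
    counts = [np.load(f, mmap_mode="r").shape[0]
              for f in glob.glob(pattern, recursive=True)]
    return max(counts) if counts else 0

# e.g. choose tt_batch_size >= max_points() so no room is dropped in training
# or rejected at test time.
```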