robertsenputras opened this issue 2 years ago
Thanks for your question!
I used 16384 samples because that was what our GPU resources allowed during training. You can increase the sample size if you have enough GPU memory.
Also, I chose a smaller number of samples drawn randomly from the much larger pool of available samples. Sampling the data in continuous space would be ideal, but collecting continuous samples is quite difficult, so this project randomly selects its training samples from a very large discrete data pool instead of using the ideal continuous random sampling.
I expect that increasing the number of samples could produce better output, but be careful not to overfit the network when using a large number of samples.
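To make the idea concrete, here is a minimal sketch of the random subsampling I am describing, assuming the LiDAR range image is flattened to shape [B, H*W] (128x2048 = 262144 rays per scan). The helper name `subsample_rays` and its arguments are hypothetical, for illustration only; the actual code in train_models.py may be organized differently:

```python
import torch

def subsample_rays(range_image, num_of_samples=16384):
    """Randomly pick query rays from a full-resolution LiDAR range image.

    range_image: tensor of shape [B, H*W], e.g. H=128, W=2048 -> 262144 rays.
    Returns a [B, num_of_samples] tensor drawn uniformly without replacement.
    """
    b, n_rays = range_image.shape
    # Draw a fresh random subset on every call, so successive training
    # iterations cover different parts of the 128x2048 ray grid over epochs.
    idx = torch.randperm(n_rays, device=range_image.device)[:num_of_samples]
    return range_image[:, idx]
```

Because the same randomly chosen indices would be applied to the input coordinates, the ground-truth output, and the prediction in each iteration, all three tensors share the [b, 16384] shape you printed; at inference time, nothing stops you from querying all 128x2048 rays.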
I hope my answer is helpful to you! Thanks.
Hi Youngsun Kwon,
I have a question regarding the training process. I saw in your config file iln_1d.yaml that you set num_of_samples to 16384. When I checked inside train_models.py and printed the shapes of the input, output, and predicted images, all of them had the shape [b, 16384]. I was wondering why the output has the same shape as the input, when the output should have 128x2048 points, right? Does this mean the model only learns from a limited random subset of the data rather than from all of it?
Would the training process be better if I increased num_of_samples to match the number of points in the output point cloud?
I hope to hear from you soon. Thanks