HuguesTHOMAS / KPConv

Kernel Point Convolutions
MIT License

question about testing NPM3D dataset by using test_any_model.py #49

Closed amiltonwong closed 4 years ago

amiltonwong commented 4 years ago

Hi, @HuguesTHOMAS ,

I followed the pretrained model guide, downloaded the provided model for the NPM3D dataset, and modified test_any_model.py as:

chosen_log = '/data/code11/KPConv/trained_models/Log_pretrained_NPM3D'

Then I ran this command: python test_any_model.py. However, the model appeared to start training again and finished after epoch 269. The log is as follows:

Epoch 269, step 222 (timings : 621.59 12.76). min potential = 100.7
Epoch 269, step 224 (timings : 617.27 12.40). min potential = 100.7
Epoch 269, step 226 (timings : 615.46 12.10). min potential = 100.7
Epoch 269, step 228 (timings : 611.76 11.81). min potential = 100.7
Epoch 269, step 230 (timings : 604.60 11.50). min potential = 100.7
Epoch 269, step 232 (timings : 599.63 11.47). min potential = 100.7
Epoch 269, step 234 (timings : 598.45 11.32). min potential = 100.7
Epoch 269, step 236 (timings : 602.64 11.45). min potential = 100.7
Epoch 269, end. Min potential = 100.7
[114.3945913653916, 114.18811528408176, 113.78857637790165]
Saving clouds

Reproject Vote #100
Done in 339.9 s

How could I get the direct testing result for NPM3D dataset by modifying test_any_model.py?

THX!

HuguesTHOMAS commented 4 years ago

Hi @amiltonwong,

You did not get the model training again, this is a log of the testing script.

The epoch numbers are not real training epochs, so just ignore them. You can see the min potential, which indicates that every location of the dataset has been tested by at least 100 different input test spheres. The final result for a point is the average prediction over all the test spheres that contain it.

If you go into the folder named /data/code11/KPConv/test/Log_pretrained_NPM3D, you should find test predictions in ply format.
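For readers who want to inspect those prediction files, here is a minimal sketch of parsing an ASCII .ply by hand. The `preds` field name and the inline file content are assumptions for illustration; the files written by the test script may be binary, in which case a library such as plyfile is more convenient.

```python
# Hypothetical example: read per-point predicted labels from an ASCII .ply.
# The 'preds' property name is an assumption, not the repo's confirmed schema.
import io

PLY_TEXT = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property int preds
end_header
0.0 0.0 0.0 2
1.0 0.0 0.0 2
0.0 1.0 0.0 5
"""

def read_ascii_ply(stream):
    """Parse the header for property names, then return one dict per vertex."""
    names, count = [], 0
    for line in stream:
        line = line.strip()
        if line.startswith('element vertex'):
            count = int(line.split()[-1])
        elif line.startswith('property'):
            names.append(line.split()[-1])
        elif line == 'end_header':
            break
    rows = []
    for _ in range(count):
        values = stream.readline().split()
        rows.append(dict(zip(names, values)))
    return rows

vertices = read_ascii_ply(io.StringIO(PLY_TEXT))
labels = [int(v['preds']) for v in vertices]
print(labels)  # [2, 2, 5]
```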

Best, Hugues

vvaibhav08 commented 4 years ago

Hi @HuguesTHOMAS ,

First of all, I would like to thank you for sharing this amazing work. So far I've tested KPConv on several datasets with different kinds of features (RGB/intensity) and it performs astonishingly well on all.

I was wondering about the optimal number of votes when testing. How much of a difference do you think there is between testing with a high num_votes (say 100), and thus a high minimum potential, and a low num_votes (say 20), and thus a low minimum potential? It would be great if you could elaborate on your point above. Qualitatively I did not notice big differences; even the clouds saved after the first vote already look really good.

Best, Vaibhav

HuguesTHOMAS commented 4 years ago

Hi @vvaibhav08,

Thanks for your interest in KPConv. I am happy to see that it performs well on your tasks.

Let's elaborate a bit more. At test time, I want the network to test every part of the scene in a regular manner. If I picked random points in the scene, high-density areas would be sampled more often and thus tested more times, which is a waste of time.

The solution I chose is to assign to each point of the dataset a value that I call its potential. Every time the network tests a sphere, I increment the potentials inside this sphere so that we know this area has already been tested. The next sphere is centered where the potential is lowest, so that every part of the dataset gets picked approximately the same number of times. Furthermore, the potentials are incremented with a Gaussian-like function (high at the center, decreasing with distance), which means that the next time the network has to test a sphere in the same area, it will not choose the same center. This increases the diversity of the test spheres: a given point will be tested by different spheres.
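This strategy can be sketched in a few lines. The snippet below is a toy illustration of the idea, not the repo's actual implementation; the point cloud, sphere radius, and increment function are made up for the example.

```python
import numpy as np

# Toy sketch of potential-based test-sphere picking (not KPConv's real code).
rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(1000, 3))   # fake scene points
potentials = np.zeros(len(points))            # one potential per point
radius = 2.0                                  # test-sphere radius

def pick_next_sphere():
    """Center the next sphere on the lowest-potential point, then raise
    potentials inside it with a Gaussian-like falloff (high at the center,
    zero at the border)."""
    center = points[np.argmin(potentials)]
    d = np.linalg.norm(points - center, axis=1)
    inside = d < radius
    potentials[inside] += np.square(1.0 - d[inside] / radius)
    return center

for _ in range(50):
    pick_next_sphere()
```

Because each pick targets the current minimum, the spheres spread over the scene instead of clustering in dense areas, and the falloff shifts successive centers in the same region.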

Eventually, the result probabilities at a given point are computed as the average of the probabilities that this point received from the different spheres containing it. This voting scheme improves the results.
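The averaging itself amounts to accumulating probability sums and vote counts per point. A toy sketch with made-up numbers (the sphere membership and probabilities here are random placeholders, not real network outputs):

```python
import numpy as np

# Toy illustration of the voting scheme: accumulate class probabilities
# per point over several overlapping test spheres, then average.
num_points, num_classes = 5, 3
prob_sums = np.zeros((num_points, num_classes))  # summed probabilities
vote_counts = np.zeros(num_points)               # spheres that saw each point

rng = np.random.default_rng(1)
for _ in range(4):                               # four fake test spheres
    seen = rng.random(num_points) < 0.8          # points inside this sphere
    probs = rng.dirichlet(np.ones(num_classes), size=int(seen.sum()))
    prob_sums[seen] += probs                     # accumulate this sphere's vote
    vote_counts[seen] += 1

# Final prediction per point: average probability over its spheres, then argmax.
avg = prob_sums / np.maximum(vote_counts, 1)[:, None]
labels = avg.argmax(axis=1)
```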

In my own experiments, you can see a difference between 1 vote and 20 votes, but between 20 and 100 the results will likely be the same. The more votes you add, the less random the results are.

I hope I was clear enough. Best, Hugues

vvaibhav08 commented 4 years ago


Yes, this helps a lot. Thanks