Closed: Roywangj closed this issue 2 years ago.
Also, I want to know whether you get different results on ModelNet-C and cls_extra_test_data.h — the result on cls_extra_test_data.h is much lower than that on ModelNet-C. I have some confusing results between the two datasets: when testing my model on ModelNet-C, I get 0.776 mOA, but when testing the same model on cls_extra_test_data.h, I get 0.651. Here are the results:
@Roywangj, thank you for your interest in the PointCloud-C Challenge!
For your questions:
"The number of points in the point cloud is 724, it's not a normal number (1024 or more) for a point cloud"
A: This is because we use atomic corruptions like Drop-Local and Drop-Global to build the robustness evaluation sets, so the number of points in the original point clouds may change. Please refer to our project page for more details.
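For intuition, a point-dropping corruption can be sketched as below. This is an illustrative numpy sketch only, not the official Drop-Global code, which defines its own severity levels and sampling scheme; the `drop_ratio` value here is made up:

```python
import numpy as np

def drop_global(points: np.ndarray, drop_ratio: float = 0.25, seed: int = 0) -> np.ndarray:
    """Randomly discard a fraction of points from an (N, 3) cloud.

    Illustrative only: the official Drop-Global corruption uses its
    own severity levels and sampling scheme.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    keep = rng.choice(n, size=int(n * (1 - drop_ratio)), replace=False)
    return points[keep]

cloud = np.random.rand(1024, 3)
corrupted = drop_global(cloud, drop_ratio=0.25)
print(corrupted.shape)  # (768, 3) -- fewer points than the original 1024
```

This is why the corrupted clouds no longer have a "round" point count such as 1024.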
"This may cause some confusion in the training process (use 1024 points for training)"
A: This should not be a problem for most commonly used models. Recall that there is usually a max pooling operation inside the classifier module, hence the varying point numbers should be unified to a fixed number before the final output.
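The max-pooling argument above can be sketched with a toy classifier head. This is a minimal numpy illustration (a single linear map standing in for the per-point MLP), not the code of any specific model; the feature width of 64 is arbitrary:

```python
import numpy as np

def toy_global_feature(points: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Per-point feature map followed by max pooling over the point axis.

    The max over axis 0 collapses any number of input points into a
    fixed-length global feature, so N = 724 and N = 1024 both work.
    """
    per_point_features = points @ w          # (N, 3) -> (N, C)
    return per_point_features.max(axis=0)    # (C,), independent of N

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 64))
feat_724 = toy_global_feature(rng.random((724, 3)), w)
feat_1024 = toy_global_feature(rng.random((1024, 3)), w)
print(feat_724.shape, feat_1024.shape)  # both (64,)
```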
"Also, I want to know if you have different results on ModelNet-C and cls_extra_test_data.h, the result of cls_extra_test_data.h is much lower than that of ModelNet-C"
A: Yes, the results could be very different. For the PointCloud-C Challenge on CodaLab, the test data is much more challenging as they are generated by a combination of different corruptions, rather than the separated ones in ModelNet-C. We did this to differentiate the competition and the official benchmark.
Hope the above answers your questions. Please let us know if there is anything else.
By the way, please also see this issue for some details on participating in the PointCloud-C Challenge.
hi @ldkong1205 .
- I have tried testing the performance of RPC using the pretrained model on the dataset cls_extra_test_data.h, and I got very low results. Here are the results on ModelNet-C and cls_extra_test_data.h, separately.
I have evaluated the pretrained RPC on the challenge test set. The result should be 0.633995
- Would you please offer the base code for testing a model on cls_extra_test_data.h (one that can reproduce the ModelNet-C results on the PointCloud-C Challenge)?
Please see the example evaluation code for RPC here. The 0.797 OA on the leaderboard is obtained with the pretrained GDANet+WolfMix.
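For reference, the metrics discussed in this thread can be sketched as follows. This is an illustrative computation, not the official evaluation script, and the per-corruption accuracies used here are made-up placeholder values, not challenge results:

```python
import numpy as np

def overall_accuracy(pred: np.ndarray, label: np.ndarray) -> float:
    """Overall accuracy (OA): fraction of correctly classified samples."""
    return float((pred == label).mean())

# mOA is the mean of the per-corruption overall accuracies.
# Placeholder numbers for illustration only.
per_corruption_oa = [0.80, 0.75, 0.70]
moa = sum(per_corruption_oa) / len(per_corruption_oa)
print(round(moa, 3))  # 0.75
```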
CodaLab page has been updated.
hi @jiawei-ren @ldkong1205, thanks a lot for your patient replies. Have a nice day!
hi @jiawei-ren, do you plan to release the code for producing mixed corrupted data, like the data used in the competition? I have only found the code for creating corrupted data at all severity levels; I could not find the code for producing mixed data. I want to produce some mixed data of my own to further test the robustness of my model. Thanks!
The code to produce the testing data for the challenge will remain private in order to keep the corruptions out of distribution.
Okay. Thanks for your reply.
hi @ldkong1205, I find that the shape of the data in cls_extra_test_data.h is (24680, 724, 3). Along dim=1, the number of points per cloud (724) is not a typical point-cloud size (1024 or more), and it differs from the ModelNet-C dataset, whose shape is (2468, 1024, 3). This may cause some confusion in the training process (whether to use 1024 or 724 points for training).
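If a training or evaluation pipeline does require a fixed point count, one common workaround is to resample each cloud to the expected size. This is a generic numpy sketch, not something the challenge organizers prescribe; whether resampling is appropriate depends on the model:

```python
import numpy as np

def resample(points: np.ndarray, n_points: int = 1024, seed: int = 0) -> np.ndarray:
    """Up- or down-sample an (N, 3) cloud to exactly n_points.

    Sampling with replacement pads clouds smaller than n_points;
    otherwise a random subset is taken.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(points.shape[0], size=n_points,
                     replace=points.shape[0] < n_points)
    return points[idx]

print(resample(np.random.rand(724, 3)).shape)   # (1024, 3)
print(resample(np.random.rand(2048, 3)).shape)  # (1024, 3)
```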