ldkong1205 / PointCloud-C

Benchmarking and Analyzing Point Cloud Perception Robustness under Corruptions
https://pointcloud-c.github.io/home
164 stars 22 forks

About the shape of the competition dataset (cls_extra_test_data.h) #5

Closed Roywangj closed 2 years ago

Roywangj commented 2 years ago

hi @ldkong1205, I find that the shape of the data in cls_extra_test_data.h is (24680, 724, 3). Along dim=1, the number of points per point cloud is 724, which is not a typical number (1024 or more) for a point cloud, and it differs from the number of points in the ModelNet-C dataset (2468, 1024, 3). This may cause some confusion in the training process (whether to train with 1024 or 724 points).
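For reference, shapes like this can be checked directly from the HDF5 file. Below is a minimal sketch using h5py; it builds a tiny stand-in file with the same layout, since the actual key names inside cls_extra_test_data.h are an assumption here and may differ:

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny stand-in file with the same per-cloud layout as the
# competition set (the real file holds (24680, 724, 3)).
path = os.path.join(tempfile.mkdtemp(), "cls_extra_test_data.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("data", data=np.zeros((2, 724, 3), dtype=np.float32))

# Inspect every dataset's shape without assuming key names in advance.
with h5py.File(path, "r") as f:
    shapes = {k: f[k].shape for k in f.keys()}

print(shapes)  # e.g. {'data': (2, 724, 3)}
```

Listing all keys and shapes this way avoids guessing the dataset name when a downloaded file differs from the benchmark splits.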

Roywangj commented 2 years ago

Also, I want to know whether you have different results on ModelNet-C and cls_extra_test_data.h: is the result on cls_extra_test_data.h much lower than that on ModelNet-C? I get some confusing results on these two datasets. When I test my model on ModelNet-C, I get 0.776 mOA; when I test the same model on cls_extra_test_data.h, I get 0.651. [Results screenshots attached.]

ldkong1205 commented 2 years ago

@Roywangj, thank you for your interest in the PointCloud-C Challenge!

For your questions:

"The number of points in the point cloud is 724, it's not a normal number (1024 or more) for a point cloud"

A: This is because we use atomic corruptions such as Drop-Local and Drop-Global to build the robustness evaluation sets, which may change the number of points in the original point clouds. Please refer to our project page for more details.

"This may cause some confusion in the training process (use 1024 points for training)"

A: This should not be a problem for most commonly used models. Recall that there is usually a max pooling operation inside the classifier module, so inputs with varying numbers of points are aggregated into a fixed-size feature vector before the final output.
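To illustrate that point, here is a minimal NumPy sketch (not the actual benchmark code) of why a PointNet-style max pooling makes the feature size independent of the number of input points; the linear map stands in for the shared per-point MLP:

```python
import numpy as np


def pointnet_like_features(points: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map each point through a shared linear layer, then max-pool over points.

    points:  (N, 3) point cloud with any number of points N.
    weights: (3, C) stand-in for a shared per-point MLP.
    Returns a (C,) global feature, independent of N.
    """
    per_point = points @ weights      # (N, C): one feature row per point
    return per_point.max(axis=0)      # (C,): pool over the point dimension


rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 64))

# A 724-point cloud (as in the challenge set) and a 1024-point cloud
# (as in ModelNet-C) both yield a feature of the same fixed size.
f_724 = pointnet_like_features(rng.standard_normal((724, 3)), weights)
f_1024 = pointnet_like_features(rng.standard_normal((1024, 3)), weights)
print(f_724.shape, f_1024.shape)  # (64,) (64,)
```

Because the pooling collapses the point dimension, the classifier head downstream never sees whether the input had 724 or 1024 points.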

"Also, I want to know if you have different results on ModelNet-C and cls_extra_test_data.h, the result of cls_extra_test_data.h is much lower than that of ModelNet-C"

A: Yes, the results can be very different. For the PointCloud-C Challenge on CodaLab, the test data are much more challenging because they are generated by combinations of different corruptions, rather than the separate ones in ModelNet-C. We did this to differentiate the competition from the official benchmark.

Hope the above answers your questions. Please let us know if there is anything else.

ldkong1205 commented 2 years ago

By the way, please also see this issue for some details on participating in the PointCloud-C Challenge.

Roywangj commented 2 years ago

hi @ldkong1205 .

  1. I have tried testing the performance of RPC with the pretrained model on the dataset cls_extra_test_data.h and got very low results. Here are the results on ModelNet-C and cls_extra_test_data.h, respectively. [Results screenshots attached.]

  2. Would you please offer the base code for testing a model on cls_extra_test_data.h (code that can reproduce the ModelNet-C results on the PointCloud-C Challenge)? [Screenshot attached.]
jiawei-ren commented 2 years ago

"1. I have tried testing the performance of RPC with the pretrained model on the dataset cls_extra_test_data.h and got very low results."

I have evaluated the pretrained RPC on the challenge test set. The result should be 0.633995.

"2. Would you please offer the base code for testing a model on cls_extra_test_data.h (code that can reproduce the ModelNet-C results on the PointCloud-C Challenge)?"

Please see the example evaluation code for RPC here. The 0.797 OA on the leaderboard was obtained with the pretrained GDANet+WolfMix.

jiawei-ren commented 2 years ago

CodaLab page has been updated.

Roywangj commented 2 years ago

hi @jiawei-ren @ldkong1205, thanks a lot for your patient replies. Have a nice day!

Roywangj commented 2 years ago

hi @jiawei-ren, do you have plans to release the code for producing the mixed corrupted data, like the data in the competition? I have only found the code for creating corrupted data at all levels; I could not find the code for producing mixed data. I want to produce some mixed data of my own to further test the robustness of my model. Thanks!

jiawei-ren commented 2 years ago

The code to produce the testing data for the challenge will remain private in order to keep the corruptions out of distribution.

Roywangj commented 2 years ago

Okay. Thanks for your reply.