Open dhgras opened 2 months ago
Hi, here are some suggestions:
- Add the validation set to the training set for the submission.
- Train three models separately and ensemble the predicted logits into one prediction. (A common trick for perception challenges; I think it was initially introduced to the ScanNet benchmark by Stratified Transformer. But remember, don't use it to report model validation performance.)
- For ScanNet, using over-segmentation (which was also used in the ScanNet annotation) may be a bit of cheating. (I think this trick was initially introduced by Mix3D, and Mask3D, Swin3D, etc. also use a similar trick. We have no choice, but again, please don't use it to report validation performance. I hope it stays pure.)

I will write a step-by-step benchmark instruction document later (when I have some time).
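The over-segmentation trick mentioned above can be sketched as a per-segment majority vote: each point's predicted label is replaced by the most common prediction within its segment. The helper below is a minimal, hypothetical illustration (names like `refine_with_segments` are not from any released codebase), assuming you already have per-point predicted labels and per-point segment ids from the ScanNet over-segmentation:

```python
import numpy as np

def refine_with_segments(pred_labels, segment_ids):
    """Refine per-point predictions by majority vote inside each
    over-segmentation segment (hypothetical helper for illustration)."""
    refined = pred_labels.copy()
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        labels, counts = np.unique(pred_labels[mask], return_counts=True)
        # assign the segment's most frequent predicted label to all its points
        refined[mask] = labels[np.argmax(counts)]
    return refined

# Toy example: 6 points in 2 segments; the lone label 1 in segment 0
# is overruled by the segment majority.
pred = np.array([0, 0, 1, 2, 2, 2])
segs = np.array([0, 0, 0, 1, 1, 1])
print(refine_with_segments(pred, segs))  # [0 0 0 2 2 2]
```

Because the segments come from the same over-segmentation used during annotation, this smoothing tends to align predictions with label boundaries, which is why it helps on the test benchmark but should not be used when reporting validation numbers.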
Thanks for your reply! Regarding the second suggestion, do you mean training three models with different training parameters? And how do you integrate the predicted logits into one prediction?
Same parameters. Sometimes it has an effect.
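The logit-ensembling step discussed above can be sketched as averaging the per-point class logits from the separately trained models and taking the argmax. `ensemble_logits` below is a hypothetical helper, not part of any released codebase; it assumes each model produces an array of shape `(num_points, num_classes)`:

```python
import numpy as np

def ensemble_logits(logits_list):
    """Average per-point class logits from several independently trained
    models, then take the argmax as the final prediction
    (hypothetical helper for illustration)."""
    mean_logits = np.mean(np.stack(logits_list, axis=0), axis=0)
    return mean_logits.argmax(axis=-1)

# Toy example: 3 models, 2 points, 3 classes.
m1 = np.array([[2.0, 0.1, 0.1], [0.1, 0.2, 1.5]])
m2 = np.array([[1.5, 0.2, 0.1], [0.1, 2.0, 0.3]])
m3 = np.array([[1.8, 0.1, 0.2], [0.1, 0.3, 1.9]])
print(ensemble_logits([m1, m2, m3]))  # [0 2]
```

Averaging raw logits (rather than hard labels) lets a model that is confidently right outvote two models that are weakly wrong, which is why this kind of ensembling often gains a little on benchmark submissions.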
A kind reminder to anyone reading these instructions: we do not use any of these tricks for validation results. Please also don't involve these tricks when validating on the benchmark.
Hi, thank you for your excellent work. I would like to know how to train on ScanNet200 to achieve the following result: ![image](https://github.com/Pointcept/PointTransformerV3/assets/42953352/06e93a4a-fe2b-442c-a3bf-f4975c526f36)