Open · liamlin5566 opened this issue 2 months ago
Hi, thanks for sharing your great work! I noticed that the test-set mIoU is higher than the validation-set mIoU on SemanticKITTI and nuScenes. I wonder whether you fine-tuned the model on the combined training and validation sets? (Some works, such as UniSeg, train the model with the validation set.)
Thanks.
Hello, I would like to ask whether the results you get on the SemanticKITTI validation set reach the level reported in the paper (70.8%)? I've run it many times and the highest I get is only 68%.
@Sylva-Lin Hi, I have not run the code due to limited computational resources. I just wonder why the mIoU on the validation set is lower than on the test set reported in their paper.
I actually ran SemanticKITTI and submitted the test set results online, getting only 66% compared to 74% in the paper, and I was curious how the authors managed to reach that level.
I didn't find their SemanticKITTI config for PTv3: https://github.com/Pointcept/Pointcept/tree/main/configs/semantic_kitti. How did you set up the config? (Did you adapt the Waymo one?)
I'm referring to the configs of PTv2, nuScenes, and Waymo.
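For anyone else stuck here, below is a rough, untested sketch of how one might adapt an existing Pointcept config for SemanticKITTI. The file name, dataset type, class count, and all values are assumptions based on the other configs in the repo, not the authors' released settings.

```python
# semseg-pt-v3-semantic-kitti.py (hypothetical name), adapted from the
# nuScenes / Waymo PT-v3 configs; every value below is a guess, not official.
_base_ = ["../_base_/default_runtime.py"]

num_classes = 19                      # SemanticKITTI evaluates 19 classes
data_root = "data/semantic_kitti"

model = dict(
    type="DefaultSegmentorV2",        # assumed to match the other PT-v3 configs
    num_classes=num_classes,
    backbone=dict(type="PT-v3m1"),    # keep backbone settings from the outdoor configs
)

data = dict(
    num_classes=num_classes,
    ignore_index=-1,
    train=dict(type="SemanticKITTIDataset", split="train", data_root=data_root),
    val=dict(type="SemanticKITTIDataset", split="val", data_root=data_root),
    test=dict(type="SemanticKITTIDataset", split="val", data_root=data_root),
)
```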
Hi, are you willing to share your SemanticKITTI configuration?
> I just wonder why the mIoU on the validation set is lower than on the test set reported in their paper.

That's because two of the categories are not present in the validation set.
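To see why missing categories matter, here is a tiny sketch with made-up per-class IoU numbers showing how classes that never appear in a split shift the reported mIoU, depending on whether they are excluded or counted as zero (this is not the actual evaluation code):

```python
import numpy as np

# Hypothetical per-class IoUs; NaN marks a class absent from the split.
ious = np.array([0.95, 0.80, np.nan, np.nan, 0.70, 0.85])

# If absent classes are excluded, the mean is taken over the valid classes only.
miou_excluding_absent = np.nanmean(ious)

# If absent classes are counted as 0 IoU instead, the mean drops noticeably.
miou_counting_absent_as_zero = np.nan_to_num(ious, nan=0.0).mean()

print(miou_excluding_absent)           # 0.825
print(miou_counting_absent_as_zero)    # 0.55
```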
Hi, yes, for all test-set submissions we trained on the combined train and val splits and used the last checkpoint for submission.
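A minimal sketch of what training on the merged train+val split might look like in a Pointcept-style config; the tuple-valued split and the other settings below are assumptions, not the authors' released configuration.

```python
# Assumed change to the data section of the config for benchmark submission:
# train on the combined train+val data and predict on the hidden test split.
data = dict(
    train=dict(
        type="SemanticKITTIDataset",
        split=("train", "val"),       # assumes the dataset accepts a tuple of splits
        data_root="data/semantic_kitti",
    ),
    test=dict(
        type="SemanticKITTIDataset",
        split="test",
        data_root="data/semantic_kitti",
    ),
)
```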
> I actually ran SemanticKITTI and submitted the test set results online, getting only 66% compared to 74% in the paper, and I was curious how the authors managed to reach that level.
Sorry, I'm too busy with a new project; please wait until after the CVPR deadline.
@Gofinge Thanks for replying. I have another question about the improvement from training on the train and val sets together: could you provide the mIoU scores on the SemanticKITTI and nuScenes test sets without the validation set added (i.e., trained only on the training set)? Thanks!
The author does provide testing code, but the KITTI score is only reported in the paper; so far it seems nobody has managed to reach the results from the paper.
Bro, the codebase has plenty of configs; if you study them a bit, it's not hard to run. Everything necessary has been open-sourced.
Hi, I remember discussing all the techniques needed for the testing benchmark in an issue in Pointcept, but it was a long time ago and I don't remember which one.
@Gofinge Thanks for answering. I will check the issue in Pointcept.
> Bro, the codebase has plenty of configs; if you study them a bit, it's not hard to run. Everything necessary has been open-sourced.
As long as you keep not publishing this, people will keep asking; there are already several follow-up comments under that issue. You don't even need to run anything, just update the config file and nobody will bring it up anymore.