Jun-CEN / Open-world-3D-semantic-segmentation

[ECCV 2022] Open-world Semantic Segmentation for LIDAR Point Clouds

Questions about the provided checkpoints and model reproduction results #27

Open · Kodapsy opened this issue 10 months ago

Kodapsy commented 10 months ago

First, regarding the checkpoints you provide at https://drive.google.com/drive/folders/1GopqXwTen7jcq1q4tI0_BY20AEMVX4bN?usp=share_link: for semantickitti_incre.pt, I found during validation that its parameters cannot be loaded properly via `load_checkpoint` from `utils.load_save_util` (a function you are surely familiar with). Does this mean that the model selected by the config of the provided incremental-stage training code differs in structure from the provided checkpoint? Following the incremental-learning validation steps you describe on GitHub, running val_cylinder_asym_incre.py gives:

[screenshot] All 276 parameter sets fail to match, and the final validation results are all 0%. I then followed the steps in your GitHub instructions to train the incremental-learning-stage model myself; when validating my own model, loading looks like this: [screenshot] All 294 parameter sets load successfully. So why does the incre model I train by following your steps have a different number of parameters than the one you provide, and why can the provided model not be loaded with `load_checkpoint`?
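One way to see where the 276-vs-294 gap comes from is to diff the two state dicts directly. A minimal sketch, not from the repo (the helper name and the nested-dict assumption are mine):

```python
import torch

def diff_state_dicts(ckpt_path, model):
    """Compare a checkpoint's state_dict against an instantiated model's."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Assumption: some checkpoints nest the weights under a "state_dict" key.
    ckpt_sd = ckpt.get("state_dict", ckpt)
    model_sd = model.state_dict()
    only_ckpt = sorted(set(ckpt_sd) - set(model_sd))
    only_model = sorted(set(model_sd) - set(ckpt_sd))
    shape_mismatch = [k for k in set(ckpt_sd) & set(model_sd)
                      if ckpt_sd[k].shape != model_sd[k].shape]
    print(f"keys only in checkpoint: {len(only_ckpt)}")
    print(f"keys only in model:      {len(only_model)}")
    print(f"shape mismatches:        {len(shape_mismatch)}")
    return only_ckpt, only_model, shape_mismatch
```

This should show whether the mismatch is renamed keys, extra/missing layers, or changed shapes.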

#26 — I think issue #26 may be somewhat similar to mine; could you check the relevant model and code? For reference, see your answer in issue #26.

#16 — but in that issue you only provided a link to the checkpoints in your answer, and nobody there has reproduced the paper's results.

Finally, about the incremental-learning validation results reported in your paper: [table screenshot] for the SemanticKITTI dataset, does mIoU_novel in this table refer to other-vehicle or to bus? I know the official dataset has a label mapping; if your novel class is other-vehicle, shouldn't the incremental-learning validation code save the predictions and then remap them with your provided semantic-kitti-api before evaluating? But in val_cylinder_asym_incre.py [code screenshot] you commented out the saving of predicted labels, and there is no code that saves the scores. Do I need to add this myself and then evaluate with the semantic-kitti-api? Referring to your other validation code, I modified val_cylinder_asym_incre.py myself as follows; please point out any mistakes. [screenshot] I would greatly appreciate answers to the above questions. Have a nice day!
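Since my screenshot may not be visible, this is roughly the shape of the saving code I added (a sketch only; the helper name and arguments are how I factored it, not the repo's API, and it assumes per-point train IDs plus the `learning_map_inv` dict from semantic-kitti.yaml):

```python
import os
import numpy as np

def save_predictions(predict_labels, learning_map_inv, save_dir, seq, frame):
    """Save predictions in the layout semantic-kitti-api expects:
    sequences/<seq>/predictions/<frame>.label, as uint32 raw label IDs."""
    # Build a lookup table mapping train IDs back to raw SemanticKITTI IDs.
    inv_map = np.zeros(max(learning_map_inv) + 1, dtype=np.uint32)
    for train_id, raw_id in learning_map_inv.items():
        inv_map[train_id] = raw_id
    raw_labels = inv_map[np.asarray(predict_labels, dtype=np.int64)]
    out_dir = os.path.join(save_dir, "sequences", seq, "predictions")
    os.makedirs(out_dir, exist_ok=True)
    raw_labels.astype(np.uint32).tofile(os.path.join(out_dir, f"{frame}.label"))
```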

luoyuchenmlcv commented 10 months ago

Hi, I tried to reproduce the paper's results on nuScenes a few months ago, and I would say it is extremely difficult to reproduce them by simply running the code. I have done a few experiments and improvements; I hope they are helpful:

  1. Set the learning rate to 1e-5 to fine-tune a closed-set model.

  2. Track the AUPRC and AUROC every epoch while you fine-tune.

  3. Set the calibration term to 0 first, to do outlier exposure only. After that fine-tuning finishes, set the calibration term to 1e-3 and continue fine-tuning for only a few epochs, or a few hundred steps; you will see a performance increase, but if you run longer, performance will decrease. Doing all three steps, AUPRC on nuScenes can reach 16.3, while running the code directly gives me only AUPRC 4.5.

  4. Try modifying the calibration loss: only the foreground objects should be "calibrated", so change `voxel_label_origin[voxel_label_origin == 17] = 0` to mask all foreground classes (see the sketch after this list).

  5. Instead of using unknown synthesis (it is actually not very helpful and even trivial in my experiments), try real outliers, such as ignored objects (nuScenes class 0).

  6. Always load the provided "ood_final" checkpoint and run your modified code.
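The change in point 4, written out as a self-contained sketch (the tensor name follows the snippet above; the function wrapper and the reading of IDs 11-14, 16, 17 as the nuScenes foreground classes are mine):

```python
import torch

def mask_foreground(voxel_label_origin: torch.Tensor) -> torch.Tensor:
    """Relabel all foreground classes as 0 so only they get 'calibrated'."""
    # Original code zeroed only class 17:
    #   voxel_label_origin[voxel_label_origin == 17] = 0
    # Replacement: zero every foreground class instead.
    filter_clss = torch.tensor([11, 12, 13, 14, 16, 17],
                               device=voxel_label_origin.device)
    voxel_label_origin[torch.isin(voxel_label_origin, filter_clss)] = 0
    return voxel_label_origin
```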

Finally, I can achieve AUPRC = 49, BUT I feel it is meaningless and a lot of time was wasted reproducing this paper.

Kodapsy commented 10 months ago

> Hi, I tried to reproduce the paper's results on nuScenes a few months ago, and I would say it is extremely difficult to reproduce them by simply running the code. […]

Hello, I am very glad to receive your reply, and thank you for sharing your reproduction process. I had been reproducing on the SemanticKITTI dataset before; recently I was testing the code on the nuScenes dataset and ran into the problem shown in the figure below. Do you have any ideas? [screenshot] The complete error message is as follows. Did you encounter this problem when working with the nuScenes dataset? AssertionError: available class: {'voxel_dataset': <class 'dataloader.dataset_semantickitti.voxel_dataset'>, 'cylinder_dataset': <class 'dataloader.dataset_semantickitti.cylinder_dataset'>, 'cylinder_dataset_test': <class 'dataloader.dataset_semantickitti.cylinder_dataset_test'>, 'cylinder_dataset_panop': <class 'dataloader.dataset_semantickitti.cylinder_dataset_panop'>, 'cylinder_dataset_panop_incre': <class 'dataloader.dataset_semantickitti.cylinder_dataset_panop_incre'>, 'polar_dataset': <class 'dataloader.dataset_semantickitti.polar_dataset'>}
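From reading the error, my understanding is that the dataset type named in the nuScenes config is looked up in a registry that currently only contains the SemanticKITTI classes. A minimal sketch of that pattern (the names below are illustrative, not necessarily the repo's):

```python
# Dataset classes self-register at import time; the config's dataset type
# must match a registered name, or the assert below fires.
REGISTERED_DATASET_CLASSES = {}

def register_dataset(cls):
    REGISTERED_DATASET_CLASSES[cls.__name__] = cls
    return cls

def get_dataset_class(name):
    # Fails exactly like the error above when `name` was never registered,
    # e.g. because the nuScenes dataset module was never imported.
    assert name in REGISTERED_DATASET_CLASSES, \
        f"available class: {REGISTERED_DATASET_CLASSES}"
    return REGISTERED_DATASET_CLASSES[name]
```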

luoyuchenmlcv commented 10 months ago

No, but this should be easy to debug if you spend some time on it.

Daniellli commented 5 months ago

> Hi, I tried to reproduce the paper's results on nuScenes a few months ago, and I would say it is extremely difficult to reproduce them by simply running the code. […]

Hi, thank you for sharing the reproduction process. Did you really reproduce 49 AUPR on nuScenes? The paper reports only 21.2 AUPR on nuScenes.

luoyuchenmlcv commented 5 months ago

> Did you really reproduce 49 AUPR on nuScenes? The paper reports only 21.2 AUPR on nuScenes.

I cannot reproduce the paper's result by purely running the code, hence I guess the reported result is cherry-picked. So I attempted some modifications to try to reach the paper's result in a non-cherry-picked way.

For AUPR 49, I loaded his checkpoint and retrained with my modifications. If I do not load the checkpoint and follow only my modifications, I get AUPR 23. To be honest, I think unknown synthesis is not very useful, since resizing should be a feature-invariant transform, and in my experiments using it or not makes little difference. The provided code also resizes objects with class 0, which is not among the testing classes, and in reality it is this part that provides the supervision signal for OOD.
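A minimal sketch of that idea, treating the real class-0 points as OOD supervision instead of synthesized unknowns (the function name and shapes are mine; the reading of 0 as the ignored/unlabeled class follows the comments above):

```python
import torch

def split_real_outliers(points: torch.Tensor, labels: torch.Tensor):
    """Split an (N, C) scan into in-distribution points and real outliers."""
    ood_mask = labels == 0          # ignored/unlabeled points act as outliers
    return points[~ood_mask], points[ood_mask]
```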

daniel-bogdoll commented 1 month ago

@luoyuchenmlcv Could you release a fork with the changes you made? As they seem quite significant, this would be interesting!