**mees:** Hello, after training the encoder (CNN-Vol) and the perspective transformer (PTN-Proj), I tested the final model by changing these lines:

```lua
base_loader = torch.load(opt.checkpoint_dir .. 'arch_PTN_singleclass_nv24_adam1_bs6_nz512_wd0.001_lbg(0,1)_ks24_vs32/net-epoch-100.t7')
encoder = base_loader.encoder
base_voxel_dec = base_loader.voxel_dec

unsup_loader = torch.load(opt.checkpoint_dir .. 'arch_PTN_singleclass_nv24_adam1_bs6_nz512_wd0.001_lbg(1,0)_ks24_vs32/net-epoch-100.t7')
unsup_voxel_dec = unsup_loader.voxel_dec

sup_loader = torch.load(opt.checkpoint_dir .. 'ptn_comb.t7')
sup_voxel_dec = sup_loader.voxel_dec
```

The results on the test set for category [chair] are: CNN-Vol IoU = 0.459553, PTN-Comb IoU = 0.162989, PTN-Proj IoU = 0.472389, which are 4 to 5 points lower than reported in the paper. What could be the reasons for this? I also noticed that the pretrained ptn-comb model performs poorly (0.16 IoU) when evaluated with my encoder instead of the pretrained encoder. What is the reason for that?
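(For reference, by IoU I mean the standard voxel intersection-over-union. A minimal Torch sketch of the metric; the 0.5 binarization threshold is my assumption, not necessarily the repo's setting:)

```lua
require 'torch'
-- Voxel IoU sketch: binarize the predicted occupancy grid, then compare it
-- against the binary ground truth. The 0.5 threshold is an assumption.
local function voxel_iou(pred, gt, thresh)
  local p = pred:ge(thresh or 0.5):double()  -- predicted occupancy mask
  local g = gt:double()                      -- ground-truth 0/1 grid
  local inter = torch.cmul(p, g):sum()       -- |pred AND gt|
  local union = torch.add(p, g):gt(0):sum()  -- |pred OR gt|
  return inter / union
end
```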
**xcyan:** As said, the results may differ between runs. You are using the encoder from run 1 but the voxel decoder from run 2; that is why you got only 0.16 IoU. Please follow the same steps for PTN-Comb.
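In other words, the encoder and decoder used at test time should come from the same checkpoint so they share a latent space. A sketch, reusing the field names from the snippet above:

```lua
require 'torch'
-- Load the encoder and the voxel decoder from the SAME run's checkpoint,
-- so the latent code they share is consistent. Field names follow the
-- snippet in the question; opt.checkpoint_dir is assumed to be set.
local ckpt = torch.load(opt.checkpoint_dir
  .. 'arch_PTN_singleclass_nv24_adam1_bs6_nz512_wd0.001_lbg(1,0)_ks24_vs32/net-epoch-100.t7')
local encoder = ckpt.encoder      -- image -> latent code
local voxel_dec = ckpt.voxel_dec  -- latent code -> voxel grid
```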
**mees:** Well then, what is the standard deviation you computed for your models? And is the 0.5027 for PTN-Proj the mean or your best run?
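(For concreteness, I mean the spread over independent training runs, along these lines; the three IoU values are placeholders, not reported numbers:)

```lua
require 'torch'
-- Mean and standard deviation of test IoU over independent training runs.
-- The three values below are placeholders, not numbers from the paper.
local ious = torch.Tensor({0.50, 0.49, 0.48})
print(('PTN-Proj IoU: mean %.4f, std %.4f'):format(ious:mean(), ious:std()))
```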
**xcyan:** There might be other reasons for such big differences for PTN-Proj/CNN-Vol, but would you please first train PTN-Comb following the steps described in the directory? If that does not resolve the issue you had (only 0.16 IoU), please let me know.
**mees:** Yeah, I can train PTN-Comb, but I am more interested in what the other reasons could be for the differences in PTN-Proj/CNN-Vol. What is your guess?
**xcyan:** (1) The pretraining setup is not optimized; (2) the fine-tuning setup is not optimized. I think (1) does not matter too much, but there could be an issue in (2). Would you please run the fine-tuning for another few epochs and see if the results differ? If there is a bug in the fine-tuning setup, we will fix it.
**mees:** Sure, can you tell me where to make the changes? I am new to Torch and still trying to figure out your code ;)
**xcyan:** In the training script, simply increase the number of epochs.
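For reference, the epoch count follows the usual Torch command-line-option pattern; the declaration below is a generic sketch, though the `--niter` flag matches the one used in the next comment:

```lua
require 'torch'
-- Illustrative Torch option parsing; only the --niter flag is confirmed in
-- this thread, the rest of the declaration is a generic sketch.
local cmd = torch.CmdLine()
cmd:option('-niter', 140, 'number of training epochs')  -- e.g. raised from 100
local opt = cmd:parse(arg)
print('training for ' .. opt.niter .. ' epochs')
```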
**mees:** Training PTN-Proj for 40 more epochs (--niter 140) increased the IoU to 0.480322 (a ~0.008 improvement).
**xcyan:** @mees Thank you for letting us know! At least the trend is similar; we will fix this issue.