Open qazmonk opened 2 years ago
Hi,
Thanks for running our code.
Unfortunately, I don't have a pretrained model right now, but I can try to provide one when I'm available.
Besides, I want to make sure that you have trained the model correctly.
There are two steps in training: (1) pre-train with the self-supervised loss on FlyingThings3D, then (2) fine-tune with the self-supervised loss on KITTI.
Note: change the data address and model address accordingly.
You don't need to load the provided pretrained model when training with self-supervised loss.
Hi,
Thanks for responding so quickly! I was trying to avoid doing the pre-training on FlyingThings, since it seems like it will take about a week on my GPU. Can the pretrained weights not be used to skip step 1?
Hi,
If you are only working with the self-supervised loss, then you will have to start from step 1, because the pre-trained weights are from the supervised loss, if I remember correctly.
I will try my best to upload a self-supervised pretrained model.
That's fine; I think supervised pre-training on synthetic data is OK for my purposes. I think the problem might just be that the training script detects the pretrained weights as starting at around epoch 730, so it only runs about 70 more epochs. Should the fine-tuning be run for the full 800?
If you use the pretrained weights, then you don't need to fine-tune at all. That model should already give you an EPE of about 0.04.
It depends on whether the model has converged. If it has not converged, you will have to train longer.
The reason I didn't change the epoch id in the code is that, if you follow the instructions, the best model is usually reached around epoch 400, and you can continue fine-tuning up to 800 epochs.
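The schedule described above (train toward 800 epochs and keep the checkpoint with the lowest validation EPE, which typically lands near epoch 400) can be sketched as follows. This is an illustration, not the repository's actual training loop; the EPE values are made up.

```python
# Sketch of best-checkpoint selection over a fixed epoch budget.
# In train_self.py the per-epoch EPE would come from a validation pass;
# here it is a toy list for illustration.

def select_best(epe_per_epoch):
    """Return (best_epoch, best_epe) over the whole schedule."""
    best_epoch, best_epe = None, float("inf")
    for epoch, epe in enumerate(epe_per_epoch):
        if epe < best_epe:
            best_epoch, best_epe = epoch, epe
            # the real script would save a 'best' checkpoint here
    return best_epoch, best_epe

# Toy EPE curve: improves, then plateaus partway through training.
epes = [0.5, 0.3, 0.1, 0.2, 0.15]
print(select_best(epes))  # (2, 0.1)
```

The point is simply that training past the best epoch is harmless as long as the best checkpoint, not the last one, is what gets evaluated.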
I was trying to replicate the KITTI Full + Self results at the bottom of Table 1. Unless I'm misunderstanding, that requires pretraining on FlyingThings and then doing the self-supervised fine-tuning on KITTI; otherwise it seems like you get an EPE of about 0.7. So should I just train on KITTI for about 400 more epochs to replicate that result?
Your understanding is correct.
But you will have to train more. You can change https://github.com/DylanWusee/PointPWC/blob/f882e4c396b338d6144bf3d07b335ccaf155bdcf/train_self.py#L124
to init_epoch = 0, so that it trains for more epochs.
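A minimal sketch of why that reset matters, assuming the script trains for `total_epochs - init_epoch` more epochs and reads `init_epoch` from the loaded checkpoint by default (the variable name comes from the linked line; the rest is illustrative):

```python
# Illustration of the epoch bookkeeping, not the actual train_self.py code.
# Assumption: the remaining schedule is total_epochs - init_epoch, and
# init_epoch defaults to the epoch stored in the loaded checkpoint.

TOTAL_EPOCHS = 800

def remaining_epochs(init_epoch, total=TOTAL_EPOCHS):
    """Epochs the training loop will still run."""
    return max(total - init_epoch, 0)

# Resuming from the released checkpoint, saved around epoch 730:
print(remaining_epochs(730))  # 70 -- only ~70 epochs of fine-tuning

# After the suggested change, init_epoch = 0:
print(remaining_epochs(0))    # 800 -- the full schedule
```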
Training on the KITTI dataset is quite fast, since there are only a little over 100 scenes.
Great, thank you so much for your help
Hello,
I'm trying to use this code to replicate the results claimed in the paper, but when I run the provided self-supervised training code the numbers are far off. Starting from the given pre-trained weights and then running train_self.py on the KITTI data gives an EPE of around 0.08, almost double what is reported in the paper. Could you please share the training schedule used to achieve the results in the paper?
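For reference, the EPE numbers quoted in this thread are the standard 3D end-point error: the mean Euclidean distance between predicted and ground-truth scene flow vectors. A minimal NumPy sketch (the `(N, 3)` array shape is an assumption about the flow layout):

```python
import numpy as np

def epe3d(pred_flow, gt_flow):
    """Mean 3D end-point error between predicted and ground-truth flow.

    Both arrays are assumed to have shape (N, 3): one flow vector per point.
    """
    return float(np.linalg.norm(pred_flow - gt_flow, axis=1).mean())

# Toy example: prediction off by 0.1 m along x for every point.
gt = np.zeros((4, 3))
pred = gt.copy()
pred[:, 0] += 0.1
print(round(epe3d(pred, gt), 3))  # 0.1
```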