DylanWusee / PointPWC

PointPWC-Net is a deep coarse-to-fine network designed for 3D scene flow estimation from 3D point clouds.
GNU General Public License v3.0

Replicating KITTI Results #19

Open qazmonk opened 2 years ago

qazmonk commented 2 years ago

Hello,

I'm trying to use this code to replicate the results claimed in the paper, but when I run the self-supervised training code provided, the results are far off. Starting from the given pre-trained weights and then running train_self.py on the KITTI data gives an EPE of around 0.08, almost double what is in the paper. Could you please share the training schedule used to achieve the results from the paper?

DylanWusee commented 2 years ago

Hi,

Thanks for running our code.

Unfortunately, I don't have a pretrained model right now, but I can try to get one when I'm available.

Besides, I want to make sure that you have trained the model correctly.

There are two steps in training:

  1. Start training from scratch with the command `python3 train_self.py config_train.yaml`. This step trains the model with only 1/4 of the original dataset.
  2. Then, fine-tune the model with the command `python3 train_self.py config_train_finetune.yaml`. This step trains the model with the full training set.

Note: change the data path and model path accordingly.

You don't need to load the provided pretrained model when training with self-supervised loss.

qazmonk commented 2 years ago

Hi,

Thanks for responding so quickly! I was trying to avoid doing the pre-training on FlyingThings since it seems like it will take about a week on my GPU. Can the pretrained weights not be used to skip step 1?

DylanWusee commented 2 years ago

Hi,

If you are only working with the self-supervised loss, then you will have to start from step 1, because the pre-trained weights are from the supervised loss, if I remember correctly.

I will try my best to upload a self-supervised pretrained model.

qazmonk commented 2 years ago

That's fine, I think it's ok to use supervised pre-training on synthetic data for my purposes. I think the problem might just be that the training script detects the pretrained weights as starting at epoch 730ish, so it only does 70 more epochs. Should the fine-tuning be run for the full 800?

DylanWusee commented 2 years ago

If you use the pretrained weights, then you don't need to fine-tune at all. That model should already give you an EPE of around 0.04.

It depends on whether the model has converged or not. If the model has not converged, you will have to train more.

The reason I didn't change the epoch id in the code is that, if you follow the instructions, the best model is usually achieved around epoch 400, and you can continue fine-tuning up to 800 epochs.

qazmonk commented 2 years ago

I was trying to replicate the KITTI Full + Self results at the bottom of Table 1. Unless I'm misunderstanding, that requires pretraining on FlyingThings and then doing the self-supervised fine-tuning on KITTI. Otherwise it seems like you get an EPE of about 0.7? So should I just train on KITTI for about 400 more epochs to replicate that result?

DylanWusee commented 2 years ago

Your understanding is correct.

But you will have to train more. You can change https://github.com/DylanWusee/PointPWC/blob/f882e4c396b338d6144bf3d07b335ccaf155bdcf/train_self.py#L124

to `init_epoch = 0` so that it trains for more epochs.

Training on the KITTI dataset is quite fast since there are only a little over 100 scenes.
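The resume behavior discussed above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `infer_init_epoch` and the checkpoint filename are hypothetical, and the assumption is only that the script derives the starting epoch from the loaded checkpoint, so a checkpoint saved near epoch 730 leaves few epochs in an 800-epoch schedule unless `init_epoch` is forced back to 0.

```python
import re

def infer_init_epoch(checkpoint_path, override=None):
    """Infer the starting epoch from a checkpoint filename.

    If the filename embeds an epoch number (e.g. 'model_0730.pth'),
    training resumes from that epoch. Passing override=0 forces the
    schedule to restart from the beginning.
    """
    if override is not None:
        return override
    match = re.search(r"(\d+)", checkpoint_path)
    return int(match.group(1)) if match else 0

total_epochs = 800

# Resuming from a checkpoint saved at epoch 730 leaves only 70 epochs.
init_epoch = infer_init_epoch("model_0730.pth")
remaining = total_epochs - init_epoch  # 70

# Forcing init_epoch = 0 runs the full 800-epoch schedule again.
init_epoch_full = infer_init_epoch("model_0730.pth", override=0)  # 0
```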

qazmonk commented 2 years ago

Great, thank you so much for your help