BlarkLee / MonoPLFlowNet

ECCV 2022, MonoPLFlowNet
MIT License

How to get the same results as the published paper? #3

Open 2020namemyself2020 opened 2 months ago

2020namemyself2020 commented 2 months ago

Sorry to bother you. These days I have tried several more times, but I still can't reproduce the published results. I'd appreciate it if you could give me some advice.

First, I follow the same data processing. I use "depth_flying" as the depth checkpoint and "checkpoint_113.pth.tar" as the scene flow checkpoint, both from https://drive.google.com/drive/folders/1MWX6ekn3k5JYeY3WIGo_QDo6thVTFX5B?usp=sharing. Then I run `python monopl_main_semi_flyingthings3d.py configs/test_monoplflyingthings3d.yaml`.

In `monopl_main_semi_flyingthings3d.py`, the only change I made was inserting `os.environ['CUDA_VISIBLE_DEVICES'] = '0'`.
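For reference, that environment variable only takes effect if it is set before CUDA is first initialized, so the insertion has to sit at the top of the script, before any `torch` import. A minimal sketch:

```python
import os

# Restrict this process to GPU 0. This must run before torch (or any other
# CUDA-using library) is imported, otherwise the setting is silently ignored.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
```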

This is `test_monoplflyingthings3d.yaml`:


```yaml
ckpt_dir: test_results/flyingthings3d
resume: resume/scene_fly/checkpoint_113.pth.tar
data_root: /dataset/FlyingThings3D_subset
depth_checkpoint_path: resume/depth_fly
evaluate: True

unsymmetric: True

arch: PLSceneNet_shallow
last_relu: False
allow_less_points: True

use_leaky: True
bcn_use_bias: True
bcn_use_norm: True

batch_size: 1
full: True

scales_filter_map: [[1., 1, 1, 1], [0.5, 1, 1, 1], [0.25, 1, 1, 1], [0.125, 1, 1, 1]]

dim: 3
num_points: 8192

DEVICE: cuda

dataset: FlyingThings3DMonopl_self

remove_ground: True

full: True

data_process:
  DEPTH_THRESHOLD: 35.
  NO_CORR: False  # True

print_freq: 1
workers: 8

min_depth_eval: 0.001
max_depth_eval: 35.
```

BlarkLee commented 1 month ago

Thanks for your interest in this work! For the 3D scene flow, we train the scene flow module end-to-end with the depth module, but in evaluation we evaluate them separately. At that time there were no existing works estimating real-scale 3D scene flow from pure images to compare against: image-based works estimate scale-ambiguous flow, while real-scale 3D scene flow can only be generated from LiDAR point clouds. So, for a fair 3D scene flow evaluation, we generate the point cloud from the depth ground truth. This way the initial point cloud (before being translated by the scene flow vectors) is coarsely at the same position as the point cloud used by LiDAR-based works, which allows a fair comparison against the 3D vectors those works produce.
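For context, generating a point cloud from a depth map amounts to back-projecting each pixel through the camera intrinsics. This is a minimal NumPy sketch, not the repository's actual code; the intrinsics are placeholders, and the depth cutoff mirrors the `DEPTH_THRESHOLD: 35.` in the config above:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, max_depth=35.0):
    """Back-project an (H, W) depth map into an (N, 3) camera-frame point cloud.

    Pixels with non-positive depth or depth beyond max_depth are dropped.
    fx, fy, cx, cy are the pinhole camera intrinsics (placeholders here).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    valid = (z > 0) & (z < max_depth)
    # Standard pinhole back-projection: X = (u - cx) * Z / fx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```

Running the same back-projection on the ground-truth depth map instead of the predicted one yields the real-scale initial point cloud the comparison relies on.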

2020namemyself2020 commented 1 month ago

Thank you sincerely for your reply.

But `monopl_main_semi_flyingthings3d.py` uses the estimated depth, and there seems to be no evaluation script that uses the depth ground truth. Could you please provide the scene flow evaluation that uses the depth ground truth?

Best wishes