autonomousvision / occupancy_networks

This repository contains the code for the paper "Occupancy Networks - Learning 3D Reconstruction in Function Space"
https://avg.is.tuebingen.mpg.de/publications/occupancy-networks
MIT License

Obtaining the "input" results for the voxel use case #31

Closed noamgat closed 4 years ago

noamgat commented 4 years ago

Hi,

I'm doing follow-up research on the voxel use case of this paper, and am trying to reproduce its results before continuing.

I installed the environment on Ubuntu and ran the following:

python eval.py configs/voxels/onet_pretrained.yaml

I obtain the following results (ran once on chairs dataset only, and once on everything):

Chairs only:

            iou       iou_voxels   kl   loss      rec_error
class name
n/a         0.663234  0.659272     0.0  81.09695  81.09695
mean        0.663234  0.659272     0.0  81.09695  81.09695

Everything:

            iou       iou_voxels   kl   loss      rec_error
class name
n/a         0.695912  0.68121      0.0  57.70389  57.70389
mean        0.695912  0.68121      0.0  57.70389  57.70389

What is the difference between iou and iou_voxels? The paper reports an Input IoU of 0.631 and an ONet IoU of 0.703.

How was the 0.631 obtained, and is it possible to reproduce the ONet IOU with the supplied code?

LMescheder commented 4 years ago

Hi @noamgat, iou is the continuous IoU with respect to the ground-truth mesh, while iou_voxels is the IoU with respect to the voxelized mesh when you naively voxelize our prediction at resolution 32^3 (i.e. evaluate the network on a 32^3 grid). iou_voxels should only be taken as a rough hint of whether the model is doing what it is supposed to do, not as an evaluation metric. See here for implementation details.
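
Not the repository's implementation, but a minimal sketch of that voxel-level comparison, assuming the predicted occupancies have been evaluated on a regular 32^3 grid and reshaped into a numpy array (voxel_iou and the toy inputs below are hypothetical names for illustration):

```python
import numpy as np

def voxel_iou(occ_pred, occ_gt, threshold=0.5):
    """IoU between two occupancy grids (e.g. 32^3).
    Hypothetical helper for illustration, not the repository's own code."""
    pred = occ_pred >= threshold                   # binarize predicted occupancies
    gt = occ_gt >= 0.5                             # ground-truth voxel occupancies
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 1.0

# Toy usage: stand-ins for the network evaluated on a 32^3 grid of query
# points and for the 32^3 voxelization of the ground-truth mesh.
rng = np.random.default_rng(0)
occ_pred = rng.random((32, 32, 32))
occ_gt = occ_pred > 0.4
print(voxel_iou(occ_pred, occ_gt))
```

The continuous iou, in contrast, is computed against the ground-truth mesh itself rather than against a fixed 32^3 voxelization, which is why the two numbers differ.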

For the evaluation in the paper we actually do not use eval.py (which is for quick & dirty evaluation), but instead eval_meshes.py, which evaluates the predicted meshes in a standardized way.
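
For context, the standardized script reports point-based mesh metrics such as accuracy, completeness, normals and Chamfer distance (the same column names appear in the eval_meshes.py output further down). Below is a minimal sketch of a symmetric Chamfer-L1 between point sets sampled from the predicted and ground-truth meshes; the function name and the equal weighting of the two terms are my assumptions, not necessarily the repository's exact implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l1(points_pred, points_gt):
    """Symmetric L1 Chamfer distance between two sampled point sets (sketch)."""
    acc = cKDTree(points_gt).query(points_pred)[0].mean()   # accuracy: pred -> GT
    comp = cKDTree(points_pred).query(points_gt)[0].mean()  # completeness: GT -> pred
    return 0.5 * (acc + comp)

# Toy usage with random point clouds standing in for surface samples:
rng = np.random.default_rng(0)
print(chamfer_l1(rng.random((2000, 3)), rng.random((2000, 3))))
```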

To evaluate the input meshes, you have to first generate the meshes and then run eval_meshes.py with the --eval_input flag.

noamgat commented 4 years ago

Thanks for the answer! So the way to generate the results would be:

python generate.py configs/voxels/onet_pretrained.yaml
python eval_meshes.py configs/voxels/onet_pretrained.yaml

and compare that to

python eval_meshes.py --eval_input configs/voxels/onet_pretrained.yaml

If so, I am getting a lot of "Warning: contains1 != contains2 for some points." during the eval_meshes runs, and then the results look meaningless:

100%|| 8751/8751 [1:00:18<00:00, 2.59it/s]

            accuracy (mesh)  accuracy2 (mesh)  chamfer (mesh)  ...  normals (mesh)  normals accuracy (mesh)  normals completeness (mesh)
class name                                                     ...
n/a         0.009669         0.000222          0.000771        ...  0.876847        0.884786                 0.868908
mean        0.009669         0.000222          0.000771        ...  0.876847        0.884786                 0.868908

The run of generate.py seemed clean.

However, the generated meshes do look reasonable (attached: cc1b4eb1a9164e04f06885bd08de3f64.zip).

Any idea on what could be going wrong?

LMescheder commented 4 years ago

@noamgat Ok, I will have a look into the issue. Thanks for reporting it!

LMescheder commented 4 years ago

@noamgat Just wanted to confirm that I can reproduce the problem. Somehow, this only happens when I run the code on the compressed data that we uploaded, but not when I use our internal, uncompressed dataset (which we used to produce the results in the paper). Maybe something went wrong when I compressed the dataset. It might take me some time to fix the problem, but I am on it.

noamgat commented 4 years ago

Thank you very much for the response and attention! If there is anything I can do to help, let me know.

LMescheder commented 4 years ago

Fixed in 0c44de4e58ebf6119060248e1e35484197947797. I now get an IoU of 0.63099 for the input and 0.70344 for the output, which is identical to what we reported in the paper. When debugging I also found that metadata.yaml was missing in the dataset we released, so the results were not sorted by class. I will push the updated dataset later.

LMescheder commented 4 years ago

#34

noamgat commented 4 years ago

I can confirm that the situation is much better: I now get 0.620 for the input and 0.696 for the pretrained model. While not exactly the same as what you report, this is much better than before.

LMescheder commented 4 years ago

@noamgat I believe this is due to the way the results are averaged. In the paper we compute all statistics per class and then average over all classes. Because of the missing metadata.yaml file (issue #34), the test code averaged over all models directly. I have just pushed the updated dataset, so the error should not occur anymore. If you do not want to download the whole dataset again, you can just copy the data/metadata.yaml file into data/ShapeNet.
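
For anyone comparing their own numbers: a toy sketch (made-up values, not taken from the evaluation output) of how per-class averaging and direct averaging over all models can diverge when one class dominates the test set:

```python
import pandas as pd

# Made-up values: three models of one class, one model of another.
eval_df = pd.DataFrame({
    "class name": ["chair", "chair", "chair", "car"],
    "iou":        [0.60,    0.60,    0.60,    0.80],
})

# Averaging over all models directly (what happened without metadata.yaml):
print(eval_df["iou"].mean())                               # 0.65

# Averaging per class first, then over classes (as reported in the paper):
print(eval_df.groupby("class name")["iou"].mean().mean())  # 0.70
```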