autonomousvision / occupancy_networks

This repository contains the code for the paper "Occupancy Networks - Learning 3D Reconstruction in Function Space"
https://avg.is.tuebingen.mpg.de/publications/occupancy-networks
MIT License

about test data in paper #59

Closed fashionguy closed 4 years ago

fashionguy commented 4 years ago

Are the test images the same images that were used in the training phase? English is not my native language, so please excuse any mistakes; I am still learning.

AlexsaseXie commented 4 years ago

To illustrate the representation power of the occupancy representation, both the train set and the test set are the same: the set of all 3D shapes from the 'chair' class. The idea is to train OccNet on all 'chair' shapes and then examine how well OccNet can reconstruct those same shapes. For each 3D shape, the input to OccNet is a unique embedding vector. If the reconstructions are good, the representation power is strong.
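For context, here is a minimal sketch of the setup described above, where each training shape is identified only by a learnable embedding vector that is decoded into occupancy values at query points. This is not the repository's actual code; the class names, dimensions, and decoder architecture are assumptions.

```python
# Minimal sketch (hypothetical, not the repository's code): one learnable latent
# code per training shape, decoded into occupancy logits at 3D query points.
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        # Concatenate each 3D query point with the shape's latent code.
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # occupancy logit per point
        )

    def forward(self, points, code):
        # points: (B, N, 3), code: (B, latent_dim)
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, code], dim=-1)).squeeze(-1)

num_shapes = 1000                        # hypothetical size of the 'chair' set
codes = nn.Embedding(num_shapes, 128)    # one embedding vector per shape
decoder = OccupancyDecoder()

# One training step: shape_idx selects each shape's embedding; occ_gt holds
# ground-truth occupancies (1 inside, 0 outside) at the sampled points.
shape_idx = torch.randint(0, num_shapes, (8,))
points = torch.rand(8, 2048, 3) - 0.5
occ_gt = torch.randint(0, 2, (8, 2048)).float()
logits = decoder(points, codes(shape_idx))
loss = nn.functional.binary_cross_entropy_with_logits(logits, occ_gt)
loss.backward()
```

Since the embedding table covers exactly the shapes used for training, evaluating on those same shapes measures how well the representation can memorize and reproduce them, not how well it generalizes.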

fashionguy commented 4 years ago

You mean the images used for testing have already been seen by the ONet?
If so, what about the reconstruction quality on unseen images, i.e., the generalization ability to unfamiliar images?
And do other papers evaluate the same way? For example, in pixel2mesh, I remember the test images are not seen during training.

AlexsaseXie commented 4 years ago

The OccNet paper does have an experiment on single-view image reconstruction. Please read the paper carefully XD. For single-view image reconstruction, there is a train/val/test split, which means all single-view images and 3D shapes used in the test phase are unseen during training.
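To make the distinction concrete, here is a hypothetical sketch of a disjoint train/val/test split over shape IDs; the function, file layout, and fractions are assumptions and do not reflect the repository's exact split format.

```python
# Hypothetical sketch: build disjoint train/val/test splits over shape IDs so
# that every test shape (and its rendered image) is unseen during training.
import random

def split_ids(model_ids, val_frac=0.1, test_frac=0.1, seed=0):
    ids = list(model_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_frac)
    n_test = int(len(ids) * test_frac)
    return {
        "val": ids[:n_val],
        "test": ids[n_val:n_val + n_test],
        "train": ids[n_val + n_test:],  # disjoint from val/test
    }

splits = split_ids([f"chair_{i:04d}" for i in range(1000)])
assert not set(splits["train"]) & set(splits["test"])  # test shapes are unseen
```

This is the opposite of the representation-power experiment above, where train and test intentionally coincide.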

fashionguy commented 4 years ago

Thanks for your time.