NVlabs / nvdiffrec

Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

How to generate the NeRF validation dataset, and is there a rule for using test data as validation? #7

Closed hzhshok closed 2 years ago

hzhshok commented 2 years ago

Hello. I want to use my own training data to generate a 3D model, but I see there are three kinds of datasets (train/test/val), and the val dataset may not be used(?). Could someone share how to generate the validation dataset (corresponding to the test data folder), and what rule to follow when making the TEST data? As I understand it, the test dataset should be the accurate reference data for the implementation. In addition, I see that files like r_0_depth_0000.png are not used by the implementation, right?

 What I know:
 a. The training dataset (data folder 'data') and **transforms_train.json** can be generated with **COLMAP**, but I don't know what rule to follow to generate the TEST data.

So, could someone please share information about my issue?

Regards

jmunkberg commented 2 years ago

The naming is a bit confusing. We do indeed use the test data, and not the validation data, in the code. See e.g. https://github.com/NVlabs/nvdiffrec/blob/main/train.py#L562

        elif os.path.isfile(os.path.join(FLAGS.ref_mesh, 'transforms_train.json')):
            dataset_train    = DatasetNERF(os.path.join(FLAGS.ref_mesh, 'transforms_train.json'), FLAGS, examples=(FLAGS.iter+1)*FLAGS.batch)
            dataset_validate = DatasetNERF(os.path.join(FLAGS.ref_mesh, 'transforms_test.json'), FLAGS)

We are not using depth supervision in the code. It can easily be added as an additional guide if you have depth available, and it does help, but it is not applicable to photos, where we don't have access to depth info.
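For a sense of what such an added depth guide could look like: below is a minimal sketch of a depth term (mean L1 error over valid pixels), written in NumPy for clarity. The function name, masking scheme, and values are all illustrative, not part of the nvdiffrec codebase; in practice this would be a weighted term added to the image loss in PyTorch.

```python
import numpy as np

def depth_l1(rendered, reference, valid_mask):
    """Hypothetical depth-supervision term: mean L1 error over valid pixels.

    `valid_mask` marks pixels where reference depth exists (e.g. from the
    r_*_depth_*.png files of a synthetic dataset)."""
    valid = valid_mask.astype(bool)
    if not valid.any():
        return 0.0
    return float(np.abs(rendered[valid] - reference[valid]).mean())

# Tiny illustrative example (2x2 depth maps, one pixel masked out)
rendered = np.array([[1.0, 2.0], [3.0, 4.0]])
reference = np.array([[1.5, 2.0], [3.0, 5.0]])
mask = np.array([[1, 1], [0, 1]])
print(depth_l1(rendered, reference, mask))  # 0.5
```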

For generating accurate poses, we have been using COLMAP. Another issue suggested an alternative (which I haven't tried): https://github.com/NVlabs/nvdiffrec/issues/3
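For reference, a typical COLMAP sparse reconstruction from the command line looks roughly like this (paths are placeholders; flags follow the COLMAP CLI but may differ between versions, so check the COLMAP docs). Converting the recovered poses into the `transforms_*.json` format is a separate step, handled by external scripts rather than by this repo.

```shell
# Assumed layout: ./images contains the captured photos.
colmap feature_extractor \
    --database_path ./colmap.db \
    --image_path ./images

colmap exhaustive_matcher \
    --database_path ./colmap.db

mkdir -p ./sparse
colmap mapper \
    --database_path ./colmap.db \
    --image_path ./images \
    --output_path ./sparse
```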

jmunkberg commented 2 years ago

The procedure from the original NeRF paper should work as well https://github.com/bmild/nerf#generating-poses-for-your-own-scenes

hzhshok commented 2 years ago

Thanks @jmunkberg for your response!

So, @jmunkberg, you mean that the 'test' data is no different from the 'train' data by design? If so, we can split our images into two sets ('train' and 'test') and then generate the two transform files using COLMAP, is that right?

I haven't yet fully understood the whole implementation, but from the responses in other issues it seems to match my guess, so I'd appreciate a quick confirmation.

Regards

jmunkberg commented 2 years ago

Yes, the train and test data are just datasets generated in the same way (using different camera poses for each set). If you have a large training set, you can, e.g., set aside 10% of the frames to use as your test set.

We have been sloppy with the distinction between validation and test data in the code, so essentially we train with the "train" set, and test/validate with the "test" dataset.
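The "set aside 10% of the frames" suggestion can be sketched as a small script that splits a single NeRF-style transforms dict into the `transforms_train.json` / `transforms_test.json` pair that `train.py` loads. The function name and split ratio are illustrative, not part of the repo; taking every k-th frame spreads the held-out poses across the capture rather than clustering them.

```python
import copy

def split_transforms(transforms, test_fraction=0.1):
    """Split a NeRF-style transforms dict into (train, test) dicts.

    Every k-th frame goes to the test set, where k = round(1 / test_fraction).
    All other top-level keys (e.g. camera_angle_x) are copied to both splits."""
    frames = transforms["frames"]
    step = max(1, round(1 / test_fraction))
    train = copy.deepcopy(transforms)
    test = copy.deepcopy(transforms)
    test["frames"] = frames[::step]
    train["frames"] = [f for i, f in enumerate(frames) if i % step != 0]
    return train, test

# Synthetic example matching the Blender/NeRF transforms layout
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
transforms = {
    "camera_angle_x": 0.69,
    "frames": [{"file_path": f"./images/r_{i}", "transform_matrix": identity}
               for i in range(100)],
}
train, test = split_transforms(transforms, test_fraction=0.1)
print(len(train["frames"]), len(test["frames"]))  # 90 10

# Then write the two files train.py looks for, e.g.:
# import json
# json.dump(train, open("transforms_train.json", "w"), indent=2)
# json.dump(test, open("transforms_test.json", "w"), indent=2)
```

One practical note: splitting the frames of a single COLMAP reconstruction (rather than running COLMAP twice) keeps train and test poses in the same coordinate frame, which is what you want here.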

hzhshok commented 2 years ago

Thanks @jmunkberg! Got it.