Closed: yokies closed this issue 1 year ago
I find that in your paper, Fig. 5(a) lists two object IDs, Pitcher and Cheese Box, but Section 4.2 (Train/test biasness) states: "We introduce such pose biasness in two of the six objects, pitcher and driller." Which object corresponds to id2 in Fig. 5(a)?
Thanks for your question! In the updated version, we fixed a previous OOD problem caused by detection. For your question, please try a relatively smaller learning rate, e.g., 5e-5 or 1e-5, for the bi-level optimization. Yes, if you use our provided NeRF model and dataset, the config is similar to the YCB-synthetic dataset, and the results should match Fig. 5(a). We wrote a new wrapper for the main function, so the detailed hyperparameters may need a little adjustment accordingly (for instance, the initial pose distribution, learning rate, and optimizer).
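For reference, a minimal sketch of what lowering the learning rate might look like, assuming the bi-level pose optimization uses a standard PyTorch optimizer (the variable names here are hypothetical; the repo's own wrapper and config names may differ):

```python
import torch

# Hypothetical pose parameters for the outer (pose) level of the
# bi-level optimization; the actual repo manages these in its wrapper.
pose_params = torch.zeros(6, requires_grad=True)  # e.g. an se(3) pose update

# A relatively small learning rate (5e-5 or 1e-5, as suggested above)
# tends to keep the pose optimization stable.
optimizer = torch.optim.Adam([pose_params], lr=5e-5)
```

This is only a config-level illustration; the initial pose distribution and optimizer choice should follow the repo's own defaults.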
Thanks for your answer. Besides, I found a problem in your dataset here, object 11. It may have some noise in the background, so your get_annotation function may return a wrong annotation. For example, the cv2.threshold result looks like:
Good point! Yes, we found the same issue: the NeRF may not be perfect, especially when the training data has heavy shadows. We added some rules to eliminate noise pixels, but the foreground extraction function may still need some improvement.
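The noise-elimination rule mentioned above isn't shown in the thread; a minimal sketch of one common such rule is to threshold the rendered image and keep only the largest connected foreground blob, discarding small background specks. This version is pure NumPy for self-containment (the repo's actual get_annotation uses cv2 and may implement a different rule):

```python
import numpy as np
from collections import deque

def clean_foreground_mask(gray, thresh=10):
    """Threshold a grayscale render and keep only the largest
    connected foreground component, dropping small noise specks."""
    mask = (gray > thresh).astype(np.uint8)
    labels = np.zeros_like(mask, dtype=np.int32)
    sizes = {}
    cur = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                # BFS flood fill to label one connected component
                cur += 1
                labels[sy, sx] = cur
                q = deque([(sy, sx)])
                n = 0
                while q:
                    y, x = q.popleft()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                sizes[cur] = n
    if not sizes:
        return mask  # no foreground at all
    keep = max(sizes, key=sizes.get)
    return (labels == keep).astype(np.uint8)
```

In practice the same effect can be had with cv2.connectedComponentsWithStats, or with a morphological opening before thresholding; the key idea is that shadow noise forms small isolated blobs while the object is one large component.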
I just cloned the latest code from GitHub and ran the steps in the README.md file with the following changes (I downloaded the pretrained NeRF models and created a sample dataset with BlenderProc here):
I think this config is similar to the YCB-synthetic dataset experiment in your paper, with id2 and the no-overlap setting, so the result should match Fig. 5(a), which shows AP reaching 95%-100%. However, the "AP-2" in "save_result.txt" plateaus at about 85% after 35 epochs and stops rising. Do you have any idea how to solve this problem?