elliottwu / DOVE

🕊 DOVE: Learning Deformable 3D Objects by Watching Videos (IJCV 2023)
https://dove3d.github.io/
MIT License

Visualization #1

Open deeplylearner opened 1 month ago

deeplylearner commented 1 month ago

Could you describe the steps for the visualization? I've completed the testing stage, but I can't get the visualization script to run.

elliottwu commented 4 weeks ago

After running the test script, it should create a folder with intermediate results. This line in the visualization script loads these results: https://github.com/elliottwu/DOVE/blob/61e128f444165908d6e8a55f6766614834e2680a/scripts/render_visual.py#L468.

You should be able to start tracing the issues from here.

deeplylearner commented 4 weeks ago

Thank you for your answer. I've completed the training, testing, and visualization process. But I'm wondering: how do I use my own single image for testing and visualization?

elliottwu commented 4 weeks ago

You need to crop the images around the instances first, similar to the examples in the provided datasets. You may do so manually or use an automatic segmentation model.
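For reference, here is a minimal sketch (not part of the DOVE codebase) of automatic cropping using torchvision's pretrained Mask R-CNN; the input/output filenames and the 10% padding are assumptions for illustration:

```python
# Minimal sketch (not part of DOVE): crop an image around the highest-scoring
# detected instance using torchvision's pretrained Mask R-CNN, then resize to 256x256.
# "my_bird.jpg" / "my_bird_crop.png" are placeholder filenames.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = Image.open("my_bird.jpg").convert("RGB")
with torch.no_grad():
    pred = model([transforms.ToTensor()(img)])[0]

# Pick the highest-scoring detection (assumes at least one instance is found)
# and pad its box by 10% so the crop is not too tight around the object.
best = pred["scores"].argmax()
x0, y0, x1, y1 = pred["boxes"][best].tolist()
pad = 0.1 * max(x1 - x0, y1 - y0)
box = (max(0.0, x0 - pad), max(0.0, y0 - pad),
       min(img.width, x1 + pad), min(img.height, y1 + pad))

crop = img.crop(box).resize((256, 256), Image.BILINEAR)
crop.save("my_bird_crop.png")
```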

Then, prepare the data in the same way as the provided test images. One example you can refer to is the toy bird dataset: https://github.com/elliottwu/DOVE/blob/main/config/bird/test_bird_toy.yml.

The code might expect a mask image and a bounding box file for each image, but they are not really used during inference. So, you could create dummy (all-white) images for the masks. For the *_box.txt, you could simply write (assuming the crops are resized to 256×256):

0, 0, 0, 256, 256, 256, 256, 0

These values correspond to global_frame_id, crop_x0, crop_y0, crop_w, crop_h, full_w, full_h, sharpness, as documented here.
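A small helper like the sketch below could generate the dummy masks and box files; the folder name and the `_mask.png` suffix are assumptions for illustration, while the `*_box.txt` contents follow the line suggested above:

```python
# Minimal sketch (naming is an assumption, not from the DOVE repo): write a dummy
# all-white mask and a *_box.txt file for each 256x256 crop, as described above.
import os
from PIL import Image

crop_dir = "my_test_frames"  # hypothetical folder containing the 256x256 crops

for name in sorted(os.listdir(crop_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in (".png", ".jpg", ".jpeg") or stem.endswith("_mask"):
        continue

    # Dummy mask: an all-white image, since masks are not used at inference time.
    Image.new("L", (256, 256), color=255).save(
        os.path.join(crop_dir, f"{stem}_mask.png"))

    # Box file: global_frame_id, crop_x0, crop_y0, crop_w, crop_h,
    # full_w, full_h, sharpness -- the dummy values suggested above.
    with open(os.path.join(crop_dir, f"{stem}_box.txt"), "w") as f:
        f.write("0, 0, 0, 256, 256, 256, 256, 0\n")
```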