RaymondWang987 / NVDS

ICCV 2023 "Neural Video Depth Stabilizer" (NVDS) & TPAMI 2024 "NVDS+: Towards Efficient and Versatile Neural Stabilizer for Video Depth Estimation" (NVDS+)
MIT License

Questions about testing my own video data #8

Open yavon818 opened 1 year ago

yavon818 commented 1 year ago

Thanks for your great work! I wonder if I can use your pretrained model to test my video data directly, or whether I should fine-tune the model on my video data first to get correct results.

RaymondWang987 commented 1 year ago

> Thanks for your great work! I wonder if I can use your pretrained model to test my video data directly, or whether I should fine-tune the model on my video data first to get correct results.

It depends on the domain gap between your test data and our VDW training set. The VDW dataset is highly diverse, so the VDW pre-trained NVDS can directly produce satisfactory results on many scenes without fine-tuning (e.g., our zero-shot evaluations on the Sintel and DAVIS datasets). However, if the scenes in your test data differ greatly from, or rarely appear in, the VDW dataset (e.g., the surgical data in issue #6), performance may degrade due to the domain gap between the training and test data. In that case, you can fine-tune NVDS for better performance.

Overall, you can first try the VDW pre-trained NVDS on your test scenarios. If the results are not satisfactory for your use case, fine-tune NVDS for better performance. In most cases, fine-tuning improves quantitative performance on a specific closed domain but reduces the model's generalization.
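One concrete way to judge whether fine-tuning is worthwhile is to gather a few frames from your domain with reference depth and measure error after a global scale-and-shift alignment (the standard protocol for affine-invariant depth predictions). The sketch below is illustrative only and not part of the NVDS codebase; the arrays stand in for real predictions and ground truth.

```python
import numpy as np

def align_scale_shift(pred, gt):
    """Least-squares align predicted depth to ground truth with a
    global scale s and shift t (affine-invariant evaluation)."""
    p = pred.reshape(-1)
    g = gt.reshape(-1)
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t

def abs_rel(pred, gt, eps=1e-6):
    """Mean absolute relative error, a common depth metric."""
    return float(np.mean(np.abs(pred - gt) / np.maximum(gt, eps)))

# Hypothetical data: a prediction that matches ground truth up to an
# affine transform aligns almost perfectly, i.e. little domain gap.
gt = np.random.default_rng(0).uniform(1.0, 10.0, size=(4, 4))
pred = 0.5 * gt + 2.0
aligned = align_scale_shift(pred, gt)
print(abs_rel(aligned, gt))  # near 0: no strong evidence of a domain gap
```

If the aligned error on your frames is far above what the paper reports on its benchmarks, that is a sign the domain gap is significant and fine-tuning is likely to help.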