Closed jack-Singapore closed 2 months ago
Hi, we haven't tried cross-model distillation (large -> small) for our metric depth estimation. You may have a try :)
The distillation process in our relative depth estimation is very simple: first compute the cosine similarity between our online-trained features and frozen DINOv2 features, then maximize that similarity until it reaches a tolerance margin (our V1 paper details this). For the cross-model case you mentioned, you would need to add an extra linear projector to align the feature dimensions of the different models.
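The recipe above can be sketched as a margin-gated cosine-similarity loss. This is a minimal NumPy sketch, not the authors' implementation: the function names, the `margin` value, and the plain matrix `W` standing in for the learned linear projector are all illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # Per-row cosine similarity between two feature matrices.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(axis=-1)

def feature_alignment_loss(student_feat, teacher_feat, W, margin=0.85):
    """Margin-gated cosine-similarity distillation loss (illustrative sketch).

    student_feat: (N, d_s) online-trained features
    teacher_feat: (N, d_t) frozen teacher features (e.g. DINOv2)
    W:            (d_s, d_t) linear projector aligning feature dimensions
    margin:       tolerance; features already this similar are not penalized
    """
    projected = student_feat @ W  # extra linear projector for cross-model dims
    sim = cosine_sim(projected, teacher_feat)
    # Only push similarity up where it is still below the tolerance margin.
    penalized = np.where(sim < margin, 1.0 - sim, 0.0)
    return penalized.mean()
```

In a training loop, `W` would be learned jointly with the student, and the teacher features would stay frozen; the margin keeps the student from collapsing onto the teacher once the features are "similar enough."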
Thanks for the reply and the hints!
Hi, congratulations on the release of V2! I found the accompanying paper quite interesting.
I have a question about the performance difference between two approaches to building metric depth estimation models: distilling knowledge from a larger metric model into a smaller metric model, versus fine-tuning a smaller relative model for metric depth estimation. Which one do you think is better?
Also, will the framework/code for model distillation be released in this repo? Thanks!