facebookresearch / dinov2

PyTorch code and models for the DINOv2 self-supervised learning method.
Apache License 2.0

Possible to continue pretraining? #204

Open rohan-mehta opened 1 year ago

rohan-mehta commented 1 year ago

I know the general recommendation is to leave the backbone frozen and train task-specific heads. However I'm interested in continuing pre-training to better fit the backbone features to my dataset. Is this feasible?

I did try it, with the steps below, but found that KNN performance started to degrade very quickly - after 20-30K images, it dropped below 10%. My approach:

  1. Load the provided backbone weights into the [SSLMetaArch](https://github.com/facebookresearch/dinov2/blob/main/dinov2/train/ssl_meta_arch.py#L31) model. This provides weights for everything except the dino/ibot heads, so train just those heads for a few hundred thousand samples (i.e., the rest of the model stays frozen); a rough sketch of this step follows the list below. At the end of this, the model still has good KNN performance.
  2. Unfreeze the entire model and run train.py on ImageNet-1K (I first tried on my own dataset, but switched to ImageNet just to make sure it's not a dataset issue). When I do this, the performance degrades quickly. After 10K images, KNN performance goes from 80% -> 25%. After another 10K, it's ~10%. And it keeps dropping like that.
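
Roughly, step 1 looked like this (a simplified sketch, not my exact code; the `dino_head`/`ibot_head` key names follow the `nn.ModuleDict` entries in `ssl_meta_arch.py`, and the checkpoint loading may need adapting depending on which weights file is used):

```python
import torch
from dinov2.train.ssl_meta_arch import SSLMetaArch

def prepare_for_head_warmup(cfg, backbone_checkpoint):
    # Build the full SSL model and load the released backbone weights into it.
    model = SSLMetaArch(cfg)
    state_dict = torch.load(backbone_checkpoint, map_location="cpu")
    # The released checkpoints contain backbone weights only, so load non-strictly
    # into both the student and the teacher backbones.
    model.student["backbone"].load_state_dict(state_dict, strict=False)
    model.teacher["backbone"].load_state_dict(state_dict, strict=False)

    # Freeze the whole model, then unfreeze only the training heads.
    for p in model.parameters():
        p.requires_grad = False
    for head_name in ("dino_head", "ibot_head"):
        # Depending on the config, iBOT may share the DINO head, in which case
        # there is no separate "ibot_head" entry.
        if head_name in model.student:
            for p in model.student[head_name].parameters():
                p.requires_grad = True
    return model
```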

I'm using the standard ViT-L config provided in the repo.

So I'm looking for any advice or thoughts here - am I approaching this wrong? Or is this somehow not possible? Thanks!

qasfb commented 1 year ago

So this is not possible at the moment because we don't provide the training heads. We haven't explored this case in depth, so I can't tell you what the right way would be, but we may look into it in the future.

rohan-mehta commented 1 year ago

@qasfb the codebase does have a `dino_head` and an `ibot_head`. Are those not the training heads?

And as mentioned, I first trained just those heads.

tZimmermann98 commented 1 year ago

@rohan-mehta did you manage to make some progress on this in the meantime? I'm currently facing the same issue and would like to continue pretraining to adapt to the domain of my use case. How long did you keep the backbone frozen in the first step?

rohan-mehta commented 1 year ago

@tZimmermann98 didn't manage to make it work unfortunately. I kept the backbone frozen for between 50k and 300k samples. That part went fine. But then as mentioned above, when I unfroze the entire model and started training, performance on KNN degraded quickly.

MarioAvolio commented 1 year ago

So we actually can't run training on a custom dataset?

csaroff commented 11 months ago

@qasfb fwiw, we'd be really interested in doing this as well!

echochoc commented 10 months ago

In my case, my training dataset will grow periodically over time, continuously incorporating images from new categories. I'd like to train the model in a lifelong-learning fashion. Any ideas?

Badar-e-Alam commented 6 months ago

@csaroff @MarioAvolio @rohan-mehta , Here's what I did: Initially, I froze the feature extractor and solely focused on training the classifier head. This approach yielded approximately 85% accuracy. However, I then decided to fine-tune both the feature extractor and the classifier head, resulting in an 8% increase in accuracy. Just to give you some context, I'm working with a custom dataset comprising roughly 38,000 samples.
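
In rough outline, the two-stage recipe was along these lines (a simplified sketch, not my exact code; the hub model name, learning rates, and `num_classes` are placeholders):

```python
import torch
import torch.nn as nn

# Load a pretrained DINOv2 backbone from torch hub (ViT-L/14 as an example).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
num_classes = 10  # placeholder: set to your dataset's number of classes
head = nn.Linear(backbone.embed_dim, num_classes)

# Stage 1: frozen feature extractor, train only the classifier head.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
# ... usual supervised training loop: head(backbone(x)) + cross-entropy ...

# Stage 2: unfreeze the backbone and fine-tune end to end,
# with a much smaller learning rate for the backbone than for the head.
for p in backbone.parameters():
    p.requires_grad = True
optimizer = torch.optim.AdamW(
    [
        {"params": backbone.parameters(), "lr": 1e-5},
        {"params": head.parameters(), "lr": 1e-4},
    ]
)
# ... continue the same training loop with the new optimizer ...
```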

vanpersie32 commented 6 months ago

@Badar-e-Alam Thank you, Alam. I am doing continued pretraining on my custom dataset. May I have your Facebook ID? I'd like to chat with you.

dimidagd commented 3 months ago

> @Badar-e-Alam Thank you, Alam. I am doing continued pretraining on my custom dataset. May I have your Facebook ID? I'd like to chat with you.

Hi @vanpersie32 can we get in contact somehow? :)

vanpersie32 commented 1 month ago

> > @Badar-e-Alam Thank you, Alam. I am doing continued pretraining on my custom dataset. May I have your Facebook ID? I'd like to chat with you.
>
> Hi @vanpersie32 can we get in contact somehow? :)

Sorry, I didn't notice the message. My email is 18810388176@163.com. Hope to get in touch with you.

Badar-e-Alam commented 1 month ago

@vanpersie32 Apologies for the late response. Please check your email.

wheel-is commented 1 month ago

https://github.com/csaroff/dinov2/tree/main/sky

Badar-e-Alam commented 1 month ago

https://github.com/Badar-e-Alam/DINOv2_Downstream