czczup / ViT-Adapter

[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
https://arxiv.org/abs/2205.08534
Apache License 2.0

Training steps on semantic segmentation #173

Open yuyu19970716 opened 6 months ago

yuyu19970716 commented 6 months ago

Hello, thank you very much for your work. I would like to ask: is your ViT-Adapter + Mask2Former setup here the same as the one used for semantic segmentation in DINOv2? Since DINOv2 does not provide the training steps for semantic segmentation, I came across your work. I look forward to your answer!