yoshitomo-matsubara / torchdistill

A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
https://yoshitomo-matsubara.net/torchdistill/

Does torchdistill support knowledge distillation for Vision Foundation Models like Grounding Dino / Grounding DinoSAM? #427

Closed solomonmanuelraj closed 11 months ago

solomonmanuelraj commented 11 months ago

Hi Team,

Currently I am working with the Grounding Dino vision foundation model for object detection (https://github.com/IDEA-Research/GroundingDINO). The model size is around 660 MB. I want to deploy it on an edge device, and I would like to use the Grounding Dino model as the teacher model for KD.

I want to know whether the torchdistill package supports vision foundation models. If so, is there any sample link / demo code available for vision foundation model KD?
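For context, this is roughly the kind of setup I have in mind: a frozen large teacher supervising a smaller student through a feature-matching loss, sketched here in plain PyTorch. This is only an illustration with toy modules and assumed feature shapes, not Grounding Dino's interfaces or torchdistill's API.

```python
import torch
import torch.nn as nn


class FeatureDistiller(nn.Module):
    """Generic feature (hint) distillation: a frozen teacher's features
    supervise a smaller student via an MSE loss. Dimensions and models
    here are placeholders, not Grounding Dino specifics."""

    def __init__(self, teacher: nn.Module, student: nn.Module,
                 student_dim: int, teacher_dim: int):
        super().__init__()
        self.teacher = teacher.eval()
        for p in self.teacher.parameters():
            p.requires_grad_(False)          # teacher stays frozen
        self.student = student
        # linear projection so student features match the teacher's width
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            t_feat = self.teacher(images)            # (B, teacher_dim)
        s_feat = self.proj(self.student(images))     # (B, teacher_dim)
        return nn.functional.mse_loss(s_feat, t_feat)


if __name__ == "__main__":
    # Toy stand-ins: any modules producing (B, D) features would work here.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))
    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
    distiller = FeatureDistiller(teacher, student, student_dim=64, teacher_dim=256)
    optimizer = torch.optim.AdamW(
        list(distiller.student.parameters()) + list(distiller.proj.parameters()),
        lr=1e-4)

    images = torch.randn(8, 3, 32, 32)
    loss = distiller(images)                 # one distillation step
    loss.backward()
    optimizer.step()
    print(f"distillation loss: {loss.item():.4f}")
```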

Thanks for your help.

yoshitomo-matsubara commented 11 months ago

Please read https://github.com/yoshitomo-matsubara/torchdistill#issues--questions--requests and use Discussions instead

Closing this as it's not a bug.