facebookresearch / vissl

VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.
https://vissl.ai
MIT License

Model distillation and pruning #524

Closed seekingdeep closed 2 years ago

seekingdeep commented 2 years ago

Hi there,

In order to reduce model size and compute, can a self-supervised model be distilled to predict only a single object class of interest, say cars? That is, feed it a few examples of cars, and then prune anything that does not contribute significantly to the prediction of "cars".
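A minimal sketch of the idea being asked about, outside of VISSL itself: distill a teacher's score for one class into a smaller student, then apply magnitude pruning with PyTorch's `torch.nn.utils.prune`. The tiny MLPs and random tensors here are stand-ins for a real SSL backbone and car images; everything except the `torch.nn.utils.prune` calls is a hypothetical illustration, not VISSL API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Stand-ins: small MLPs in place of a real pretrained SSL backbone (teacher)
# and a compact student model.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

# Distillation step: the student learns to mimic the teacher's "car-ness"
# score on a batch of example inputs (random tensors standing in for images).
x = torch.randn(256, 32)
with torch.no_grad():
    target = torch.sigmoid(teacher(x))

opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = F.binary_cross_entropy(torch.sigmoid(student(x)), target)
    loss.backward()
    opt.step()

# Pruning step: zero out the 50% lowest-magnitude weights in each linear
# layer, a crude proxy for "anything that doesn't largely contribute".
for m in student.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")  # bake the mask into the weight tensor

sparsity = (student[0].weight == 0).float().mean().item()
print(f"first-layer sparsity after pruning: {sparsity:.2f}")
```

Note that unstructured magnitude pruning only zeroes weights; actual compute savings require structured pruning (removing whole channels) or sparse-aware inference.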