yoshitomo-matsubara / torchdistill

A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
https://yoshitomo-matsubara.net/torchdistill/
MIT License

Distilling Knowledge from an image classification model with a sigmoid function and binary cross entropy #211

Closed publioelon closed 2 years ago

publioelon commented 2 years ago

Hi, I found this paper and the GitHub repository, and the framework looks robust. I was wondering whether it is possible to use your framework to distill knowledge from a cumbersome image classification model that uses a sigmoid function for classification and binary cross entropy for the loss. Since the cumbersome model is trained on a custom dataset, I would like to know whether I can use your framework to distill its knowledge into a smaller network that uses softmax for classification instead, and what steps are required to do so.

yoshitomo-matsubara commented 2 years ago

Hi @publioelon

Yes, I think you can design the experiments with torchdistill easily. If you further clarify your settings, I can give you the steps to do that.

  1. How many classes does your dataset have? Is it a binary-classification task (i.e., 2 classes)?
  2. Why do you want to use the output of the sigmoid function for computing the loss? Is it because you use PyTorch's binary cross entropy module (BCELoss)?
  3. What model architectures do you want to use as the teacher and student models? If they are models of your own design, tell me the input patch size (e.g., 224 x 224) and the output shape.

Also, please use the Discussions tab above for questions. As explained here, I want to keep Issues mainly for bug reports.

publioelon commented 2 years ago

Hello @yoshitomo-matsubara, thank you for replying, and I apologize for not starting a discussion in the right place.

EDIT: should I close this issue and re-open a thread in Discussions?

I have a cumbersome model that does very well (high accuracy) on a fever classification task using thermal images. It is trained with transfer learning from a VGG16 architecture, the input shape is 128x160, and it has two classes: fever and healthy. From the papers and experiments I have looked at, the loss is usually the KL divergence between the two softmax outputs (see the sketch below). Due to the limited samples in my dataset, and because I need a softmax classification layer instead of a sigmoid one without retraining from scratch, I need to rely on knowledge distillation to compress the model for single-board computers that use a hardware accelerator.
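For reference, a minimal PyTorch sketch of that standard KD loss (the temperature and alpha values below are placeholders I picked for illustration, not values from this thread):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, temperature=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened softmax outputs
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard-target term: standard cross entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```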

Basically, I have a cumbersome TensorFlow .h5 model that uses binary cross entropy (I am not sure why it uses binary cross entropy), and I want to compress it into a smaller model that uses softmax for classification so I can run it with an Edge TPU accelerator.
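One possible way to bridge a sigmoid teacher and a two-logit softmax student is to expand the teacher's single sigmoid probability into an explicit two-class distribution before computing the KL divergence. The sketch below is only one way this could look in PyTorch; it assumes the teacher's sigmoid outputs are already available as a tensor (e.g., ported or precomputed from the .h5 model), and the function names are hypothetical, not part of torchdistill's API:

```python
import torch
import torch.nn.functional as F

def sigmoid_to_two_class(teacher_prob, eps=1e-7):
    # teacher_prob: sigmoid output P(fever) with shape (batch,) or (batch, 1)
    p = teacher_prob.clamp(eps, 1.0 - eps).view(-1, 1)
    # Explicit two-class distribution [P(healthy), P(fever)]
    return torch.cat([1.0 - p, p], dim=1)

def kd_loss_from_sigmoid_teacher(student_logits, teacher_prob):
    # student_logits: raw logits of shape (batch, 2) from the softmax student
    teacher_dist = sigmoid_to_two_class(teacher_prob)
    return F.kl_div(
        F.log_softmax(student_logits, dim=1),
        teacher_dist,
        reduction="batchmean",
    )
```

A hard-label cross entropy term could be mixed in as well, as in the previous sketch.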

yoshitomo-matsubara commented 2 years ago

Hi @publioelon No worries, can you close this issue and migrate your comment(s) to a discussion? I expect there will be multiple interactions on this, and you will have follow-up questions as you run experiments, so Discussions would be more convenient for me.