-
https://github.com/SsisyphusTao/SSD-Knowledge-Distillation/blob/0597fbee635afcf0b8710ba3a9e40ab9f010aea5/nets/multibox_loss.py#L158
-
https://github.com/txyugood/Knowledge_Distillation_AD_Paddle/blob/9742a6872e5b5fae4d30b7776fcfc230621b1aa4/test_functions.py#L21
-
File "/home/hongyang/codebase/pytorch_code/yolov5-knowledge-distillation/utils/loss.py", line 57, in compute_distillation_output_loss
t_lbox *= h['giou'] * h['dist']
KeyError: 'giou'
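A likely cause, judging from the traceback alone: newer YOLOv5 hyperparameter files renamed the `giou` gain to `box`, so `h['giou']` no longer exists. A minimal defensive sketch, not the fork's actual code (the hyperparameter values below are placeholders):

```python
# Minimal sketch of a defensive fix, assuming `h` is the hyperparameter dict
# loaded from a hyp.*.yaml file and 'dist' is the distillation gain added by
# the fork. Recent YOLOv5 hyp files renamed the 'giou' gain to 'box', so fall
# back to whichever key is present.
import torch

h = {'box': 0.05, 'dist': 1.0}          # placeholder hyperparameters
t_lbox = torch.zeros(1)                 # placeholder distillation box-loss term

box_gain = h.get('giou', h.get('box'))  # tolerate either key name
t_lbox *= box_gain * h['dist']
```

Alternatively, renaming the lookup to `'box'` (or adding a `giou` entry back to the hyp YAML) should make the key lookup consistent with the hyp file actually in use.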
-
Hi, may I ask why the KL loss is used during validation? This doesn't match Equation 9 in the paper, which is a cross-entropy loss.
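One way to see why either loss behaves the same for the student: with a fixed teacher distribution, cross-entropy and KL divergence differ only by the teacher's entropy, which is constant with respect to the student. A small numerical check in generic PyTorch, not the repository's code:

```python
# Generic check that CE(p_t, p_s) = KL(p_t || p_s) + H(p_t) for soft targets,
# so the two objectives differ only by an additive constant per batch.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)

p_t = F.softmax(teacher_logits, dim=1)          # fixed teacher distribution
log_p_s = F.log_softmax(student_logits, dim=1)  # student log-probabilities

ce = -(p_t * log_p_s).sum(dim=1).mean()             # cross-entropy form (Eq. 9 style)
kl = F.kl_div(log_p_s, p_t, reduction='batchmean')  # KL(p_t || p_s)
h_t = -(p_t * p_t.log()).sum(dim=1).mean()          # teacher entropy (constant)

print(torch.allclose(ce, kl + h_t))  # True
```

This does not explain the author's exact choice of validation metric, but it shows the two quantities rank students identically up to that constant.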
-
Hi, thanks for the great work. I wonder if you could release the knowledge distillation code, which would be helpful for reproducing the results of the paper. Thanks a lot.
-
Hello!
We have recently been studying semantic segmentation and knowledge distillation. Could you update the QR code for the discussion group?
-
Recently, I tried to improve the performance of yolov7-tiny by distilling knowledge from yolov7. I have tried both logits-based and feature-based distillation. However, it didn't work for yolo…
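For context, a typical feature-based setup for this kind of experiment adds a 1x1 adaptation layer on the student feature map and imitates the teacher map with an MSE loss. The sketch below is generic and uses hypothetical channel sizes; it is not the YOLOv7 code:

```python
# Generic feature-imitation sketch (not the YOLOv7 codebase): a 1x1 adaptation
# conv projects the student feature map to the teacher's channel count, and an
# MSE loss pulls the two maps together. Channel counts are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.adapt = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        # Teacher features are detached so only the student receives gradients.
        return F.mse_loss(self.adapt(f_student), f_teacher.detach())

# Hypothetical neck features: tiny model with 128 channels, full model with 256.
loss_fn = FeatureDistillLoss(128, 256)
loss = loss_fn(torch.randn(2, 128, 40, 40), torch.randn(2, 256, 40, 40))
print(loss.item())
```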
-
Hi, there is a similar idea in "Correlation Congruence for Knowledge Distillation" (arXiv:1904.01802v1). Which do you think is more efficient?
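For reference, the cited paper transfers the pairwise correlations between instances in a batch rather than per-instance outputs. A minimal sketch of that idea, using cosine similarities instead of the paper's kernel and hypothetical embedding sizes:

```python
# Minimal sketch of the correlation-congruence idea (arXiv:1904.01802), using
# cosine similarity rather than the paper's kernel: match the student's pairwise
# instance-similarity matrix to the teacher's.
import torch
import torch.nn.functional as F

def correlation_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities of a (batch, dim) embedding matrix."""
    normed = F.normalize(features, dim=1)
    return normed @ normed.t()

def correlation_congruence_loss(f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(correlation_matrix(f_student), correlation_matrix(f_teacher.detach()))

# Hypothetical embeddings: batch of 8, student dim 64, teacher dim 128.
print(correlation_congruence_loss(torch.randn(8, 64), torch.randn(8, 128)).item())
```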
-
Hi,
Did you not use sequence-level knowledge distillation for FastSpeech training?
-
Title: Unable to Reproduce DKD Experiment Results on Tesla T4 Server Using Repository Code
Body: Dear maintainers,
I recently attempted to replicate the experiment results of Distilled Knowledge Dist…