-
Hello~ Could you please release the dataset used for distillation? It would be very kind of you to release it~
-
### Prerequisite
- [X] I have searched [the existing and past issues](https://github.com/open-mmlab/mmyolo/issues) but cannot get the expected help.
- [X] I have read the [FAQ documentation](https…
-
Thank you for sharing this great repo.
Can you please provide instructions, or code if available, for task distillation on the SQuAD dataset?
Thanks in advance
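
In case it helps while waiting for an official answer, here is a minimal sketch of what task distillation on SQuAD could look like with Hugging Face models. The model names, temperature `T`, and weight `alpha` below are placeholders of my own, not this repository's actual setup:

```python
# Sketch of task distillation for extractive QA (SQuAD-style).
# Assumes Hugging Face transformers; model names, T, and alpha are
# placeholders, not this repo's configuration.
import torch
import torch.nn.functional as F
from transformers import AutoModelForQuestionAnswering

teacher = AutoModelForQuestionAnswering.from_pretrained(
    "bert-large-uncased-whole-word-masking-finetuned-squad")
student = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

def distill_loss(batch, T=2.0, alpha=0.5):
    # Teacher runs in inference mode; student also computes the hard-label QA loss.
    with torch.no_grad():
        t_out = teacher(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"])
    s_out = student(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    start_positions=batch["start_positions"],
                    end_positions=batch["end_positions"])
    # Soft-target KL over the start/end position distributions.
    kl = sum(
        F.kl_div(F.log_softmax(s / T, dim=-1), F.softmax(t / T, dim=-1),
                 reduction="batchmean") * T * T
        for s, t in [(s_out.start_logits, t_out.start_logits),
                     (s_out.end_logits, t_out.end_logits)]
    )
    # Mix the hard-label loss with the distillation loss.
    return alpha * s_out.loss + (1 - alpha) * kl
```

Each `batch` here is assumed to be a tokenized SQuAD batch with answer spans already mapped to token start/end positions.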
-
Hi, may I ask why the KL loss is used during validation? This doesn't match equation 9 in the paper, which is a cross-entropy loss.
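
For reference, if equation 9 is the soft cross-entropy against the teacher distribution (my reading, not confirmed against the paper), the two quantities differ only by the teacher entropy:

```latex
% p = teacher distribution, q = student distribution (assumed, not taken from the paper)
\mathrm{KL}(p \parallel q)
  = \sum_i p_i \log \frac{p_i}{q_i}
  = \underbrace{-\sum_i p_i \log q_i}_{\text{cross-entropy } H(p,\,q)}
  \;-\; \underbrace{\Bigl(-\sum_i p_i \log p_i\Bigr)}_{\text{teacher entropy } H(p)}
```

Since $H(p)$ does not depend on the student, the two losses differ by a constant offset for a fixed teacher, so swapping them changes the reported value but not gradients or the ranking of checkpoints. A confirmation from the authors would still be appreciated.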
-
Hi, the work is fantastic.
I was trying my hand at distillation using your code.
I have done the following so far and need your guidance to move forward, please:
1. I have used the pretrained model given at the li…
-
Hello,
Can I use the Jetson Nano kit to train models from your repository on my custom dataset?
Would the specifications of the Jetson Nano kit be fine for training models designed according to distillatio…
-
### Before Asking
- [X] I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully.
- [ ] I want to train my custom dataset, and I have read the …
-
Approved by @SalmanMohammadi. This issue's purpose is to determine which contrastive optimization methods should be added to torchtune and to track the methods that are being implemented right now. For all of the…
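
As a reference point for the discussion, a rough sketch of the shape these contrastive/preference losses tend to take (DPO-style, for illustration only; this is not torchtune's actual interface or implementation):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO-style contrastive preference loss (illustrative placeholder)."""
    # Implicit rewards: log-ratio of policy vs. reference for each response.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Encourage a positive margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```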
-
distil-whisper loads datasets such as common_voice, which can be accessed on Hugging Face.
But loading a private speech dataset is not supported.
I implemented a method to load local speech data…
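
For anyone hitting the same limitation, one possible approach with the 🤗 `datasets` library is the `audiofolder` loader. This is only a sketch, not distil-whisper's supported path; the directory layout, the `metadata.csv` columns, and the 16 kHz sampling rate below are assumptions:

```python
# Sketch: load a local/private speech dataset instead of a Hub dataset
# such as common_voice. Assumes audio files plus a metadata.csv with
# "file_name" and "transcription" columns (hypothetical layout).
from datasets import load_dataset, Audio

dataset = load_dataset("audiofolder", data_dir="path/to/my_speech_data")
# Whisper-family models expect 16 kHz audio, so resample on the fly.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

print(dataset["train"][0]["audio"]["array"].shape)
print(dataset["train"][0]["transcription"])
```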
-
### Title: [Curriculum Learning for Dense Retrieval Distillation](https://dblp.org/pid/279/6463.html)
### Year: 2022
### Venue: SIGIR
### Main Problem
This research aims to improve the performan…