-
### Search before asking
- [X] I have searched the Autodistill [issues](https://github.com/autodistill/autodistill/issues) and found no similar bug report.
### Bug
Hi
I am following this tutor…
-
Hello, I would like to ask: where in this code is the training method that uses a distillation algorithm? I see that the teacher and student networks are the same, so there is no distillation, r…
-
Thank you for your great work on this project and for sharing the code. I used the provided code to train and tried to reproduce the results, but there was a problem. As you can see, the best numbers …
-
Hi,
How can I use the provided repository for image-conditional training and evaluation (super-resolution)?
And which approach is recommended for such a task: consistency distillation, or consiste…
-
Hello, thank you for sharing the great work.
I wonder when the code will be released.
Also, I have a question about the intuition of using an unmasked teacher in UMT and InternVideo2.
Why do you us…
-
I find it truly fascinating! Have you come across any methods similar to pruning, distillation, or quantization that could be applied to this model? While I'm aware of some size options, it would be t…
-
Hello, I find your work highly interesting and would like to cite it, when can we expect the paper to be published?
Furthermore, I was curious about the distillation loss. The configuration for th…
-
## In a nutshell
Research aimed at improving the computational efficiency of segmentation.
A very small network (Just-in-Time Network) is continually trained online via distillation, specializing it for particular scenes such as sports footage or surveillance cameras. Accuracy is roughly on par with the teacher (Mask-RCNN), while computational efficiency, including training, improves fivefold.
### Paper link
https://arxiv.org/abs/1812.02699
##…
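The online distillation summarized above trains the small student on the teacher's soft outputs. A minimal sketch of such a soft-target (temperature-scaled KL) distillation loss in plain NumPy is below; this is an illustration of the general technique, not the paper's actual code, and the function names and temperature `T` are assumptions:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target loss: mean KL(teacher || student) at temperature T.

    In an online setting like the one described, the teacher
    (e.g. Mask-RCNN) labels a sparse subset of frames and the
    small student is updated on those frames as the video plays.
    """
    p = softmax(teacher_logits, T)  # teacher's softened class probabilities
    q = softmax(student_logits, T)  # student's softened class probabilities
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return float(kl.mean() * T * T)
```

When student and teacher logits agree, the loss is zero; the further the student's distribution drifts from the teacher's, the larger the penalty, which is what drives the per-scene specialization.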
-
Hi, I downloaded the checkpoints and put them in the output directory, then used the commands to train the three models, but I can't reproduce the results in the table. Is there anything else that n…
-
I have a question: this paper introduces a "fused knowledge prior", which essentially uses fusion_k as the ground truth to train the model. How do you guarantee that this model's outputs are better than those generated by other models?