-
There's a bunch of examples in the Python version, including documentation here: https://jxnl.github.io/instructor/examples/
Here are all the examples; only a few of them are interesting, so use the c…
jxnl updated 9 months ago
-
Hello, I would like to ask: where is the distillation training method in this code? Since the teacher and student networks appear to be identical, there seems to be no distillation, r…
-
Hi, thanks again for sharing this project.
I would like to ask some details about “Multi-space Alignment”.
![image](https://github.com/user-attachments/assets/43f3f92e-19e2-4354-80b8-6219619328ba)
I…
-
[Paper](https://arxiv.org/abs/2104.14294)
[Code](https://github.com/facebookresearch/dino)
Authors:
Mathilde Caron, Hugo Touvron, et al.
FBAI.
![](https://raw.githubusercontent.com/fac…
XFeiF updated 3 years ago
-
I'll write up my own summary of papers on adversarial examples (AE) and knowledge distillation (KD) for machine translation here.
Opening this issue as a placeholder for now.
I'll update it as I go.
# Adversarial examples for machine translation
These basically work by perturbing characters and words.
## Black box
### Synthetic noise
Rule-based character/word perturbations such as replace, swap, and delete.
### …
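The replace/swap/delete operations above can be sketched in a few lines of Python. This is a hypothetical illustration of rule-based character noise, not code from any of the summarized papers:

```python
import random

def perturb(word: str, op: str, rng: random.Random) -> str:
    # Apply one rule-based character-level perturbation to a word.
    # op is one of "replace", "swap", "delete".
    if len(word) < 2:
        return word  # too short to perturb safely
    i = rng.randrange(len(word) - 1)  # position to modify
    if op == "swap":
        # Swap adjacent characters i and i+1.
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "delete":
        # Drop character i.
        return word[:i] + word[i + 1:]
    if op == "replace":
        # Replace character i with a random lowercase letter.
        return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]
    return word
```

In the black-box setting, such noise is applied to source sentences without any access to model gradients.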
-
## DINOv1: Emerging Properties in Self-Supervised Vision Transformers
* \[[`blog`](https://ai.facebook.com/blog/dino-paws-computer-vision-with-self-supervised-transformers-and-10x-more-efficient-tr…
-
Paper summary:
"A Student that imitates the Teacher well is not necessarily a good Student. However, a Student generally does have to imitate the Teacher well to reach performance similar to the Teacher's. This paper corrects a number of previously held misconceptions around this point."
KD is about emulating the Teacher Model …
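As a rough illustration of how a Student is trained to emulate a Teacher, here is a minimal pure-Python sketch of the standard Hinton-style KD loss on temperature-softened logits. The temperature `T` and the `T^2` scaling are the conventional choices; none of this is taken from the paper summarized above:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) between temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When the Student's logits match the Teacher's exactly, the loss is zero; the point of the paper is that a low value of this loss alone does not guarantee a good Student.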
-
lbda1 updated 4 years ago
-
Hi,
Thanks for sharing this code; it's really helpful.
Recently I read your paper: "MSD: Multi-Self-Distillation Learning via Multi-classifiers within Deep Neural Networks". It's a very interesti…
-
e.g. BERT Tiny