-
Sik-Ho Tang. [Brief Review — Distilling Visual Priors from Self-Supervised Learning](https://sh-tsang.medium.com/brief-review-distilling-visual-priors-from-self-supervised-learning-e8377118e797).
-
I tried to reproduce the 80% top-1 performance of SlimMobilenet(V5), but could only get around 72.3% top-1 accuracy. Would you release the pretrained model in the future?
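For comparing numbers, here is a minimal sketch of a standard top-1 evaluation loop in PyTorch; the model and dataloader are placeholders, and this is not the repository's own evaluation script:

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    """Fraction of samples whose highest-scoring class matches the label."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)  # index of the top logit per sample
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total
```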
-
Running either train_lcm_distill_sd_wds.py or train_lcm_distill_sdxl_wds.py runs into a missing-argument error:
```
Traceback (most recent call last):
  File "/home/smhu/diffusers/examples/c…
```
-
Hi @xinyu1205, thank you for sharing this work. In the paper there is a mention of using CLIP to get image embeddings:
> We also adopt the CLIP image encoder to distill image feature, which further…
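A minimal sketch of what distilling image features from a frozen CLIP image encoder could look like, using the Hugging Face transformers CLIP classes; the assumption that the student features are projected to CLIP's embedding size and the cosine-distance loss are illustrative choices, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F
from transformers import CLIPVisionModelWithProjection, CLIPImageProcessor

# Frozen CLIP image encoder acting as the teacher.
teacher = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_distill_loss(student_features: torch.Tensor, pil_images) -> torch.Tensor:
    """Pull student image features toward CLIP image embeddings (cosine distance).

    student_features is assumed to already have CLIP's embedding dimension (512 here).
    """
    inputs = processor(images=pil_images, return_tensors="pt")
    with torch.no_grad():
        target = teacher(**inputs).image_embeds          # (B, 512) CLIP embeddings
    student = F.normalize(student_features, dim=-1)
    target = F.normalize(target, dim=-1)
    return (1 - (student * target).sum(dim=-1)).mean()   # 1 - cosine similarity
```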
-
**Here we describe the process of:**
1. creating a master INDEX ([INDEXofOIL186Dictionaries.md](https://github.com/petermr/CEVOpen/blob/master/dictionary/INDEXofOIL186Dictionaries.md)) of [DictionaryN…
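A hypothetical sketch of how such a master index could be generated: scan a dictionary directory and emit one markdown link per dictionary file. The directory layout, file extension, and output name are assumptions, not CEVOpen's actual conventions:

```python
from pathlib import Path

def build_index(dictionary_dir: str, out_name: str = "INDEX.md") -> None:
    """Write a markdown index with one relative link per dictionary file."""
    root = Path(dictionary_dir)
    lines = ["# Master index of dictionaries", ""]
    for path in sorted(root.glob("*.xml")):  # assumed: one XML file per dictionary
        lines.append(f"- [{path.stem}]({path.name})")
    (root / out_name).write_text("\n".join(lines) + "\n")

build_index("dictionary/oil186")  # hypothetical path
```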
-
Hi, the pretraining data uses ALLaVA images from both LAION and VFLAN,
but the LAION-part image names are in a completely different format from ALLaVA's image names.
I tried to find:
```
465440.jpeg
3206…
```
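A hypothetical helper for checking whether image names like these exist anywhere under the downloaded data directory; the root path, extension, and example name are placeholders:

```python
from pathlib import Path

def find_images(data_root: str, names: list[str]) -> dict[str, list[Path]]:
    """Map each requested file name to every matching path under data_root."""
    hits = {name: [] for name in names}
    for path in Path(data_root).rglob("*.jpeg"):
        if path.name in hits:
            hits[path.name].append(path)
    return hits

print(find_images("allava_laion/images", ["465440.jpeg"]))  # placeholder path and name
```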
-
Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models
https://arxiv.org/abs/2312.14923
-
Thank you for your great work and for releasing the source code!
I have some questions about the ABR loss.
In the paper, you say that using unbiased losses directly is not feasible because the ABR input im…