-
Hey @FreddeFrallan, amazing contribution. I wanted to ask if it is possible to share the code for translating and generating the CLIP embedding dataset. I want to train on a few languages that were no…
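For reference, a minimal sketch of what generating such a teacher-embedding dataset could look like, assuming the OpenAI `clip` package and a plain list of English captions; the caption list, output file, and the choice of ViT-B/32 are assumptions, not this repository's actual pipeline.

```python
# Hedged sketch: encode English captions with a frozen CLIP text encoder to
# produce target embeddings for a multilingual student. All paths and captions
# below are placeholders.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

english_captions = ["a photo of a dog", "a red car parked on the street"]

with torch.no_grad():
    tokens = clip.tokenize(english_captions).to(device)
    text_embeddings = model.encode_text(tokens)  # (N, 512) for ViT-B/32
    text_embeddings = text_embeddings / text_embeddings.norm(dim=-1, keepdim=True)

# These embeddings would then be paired with translated captions and used as
# regression targets when training a multilingual text encoder.
torch.save(text_embeddings.cpu(), "clip_text_targets.pt")
```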
-
Hello author!
Sorry to bother you again:
Here is my loss implementation; it should be plain distillation:
def knowledge_distillation_kl_div_loss(pred,
                                       soft_label,
                                       T,
                                       …
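For comparison, here is a minimal sketch of a plain KL-divergence distillation loss with that signature, assuming `pred` and `soft_label` are raw (unnormalized) logits; the `detach_target` flag and the reduction choice are my assumptions, not necessarily the poster's code.

```python
import torch
import torch.nn.functional as F

def knowledge_distillation_kl_div_loss(pred, soft_label, T, detach_target=True):
    """Plain logit distillation: KL(softmax(soft_label/T) || softmax(pred/T)) * T^2.

    pred:       student logits, shape (N, C)
    soft_label: teacher logits, shape (N, C)
    T:          temperature; the T*T factor keeps gradient magnitudes
                comparable across temperatures.
    """
    target = F.softmax(soft_label / T, dim=1)
    if detach_target:
        # Treat the teacher's distribution as a constant target.
        target = target.detach()
    kd_loss = F.kl_div(
        F.log_softmax(pred / T, dim=1), target, reduction="none"
    ).sum(dim=1) * (T * T)  # per-sample KL, shape (N,)
    return kd_loss.mean()
```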
-
-
Hello,
Thanks for the awesome work and for releasing the model and test code. Could you release the training code and suggest a dataset?
I want to train it for high-resolution faces.
-
This is my situation.
I trained base_cnn in advance on the CIFAR-10 dataset to compare performance between base_cnn and cnn_distill.
I also trained base_resnet18 as a teacher on the same dataset…
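For context, a minimal sketch of the comparison setup described above, assuming the standard torchvision CIFAR-10 pipeline; `student` and `teacher` stand in for the poster's base_cnn and base_resnet18, and the loss weighting and hyper-parameters are assumptions.

```python
# Hedged sketch of one distillation step for comparing a plain student
# (base_cnn) against a distilled one, with base_resnet18 as a frozen teacher.
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

def distill_step(student, teacher, images, labels, T=4.0, alpha=0.9):
    """One step of hard-label cross-entropy plus soft-label KL against the teacher."""
    with torch.no_grad():
        teacher_logits = teacher(images)  # teacher stays frozen
    student_logits = student(images)
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * soft + (1.0 - alpha) * hard
```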
-
---
## 🚀 Feature
Separate docs website, or much more detail in readmes throughout the website.
## Motivation & Examples
* Package in its current form is ver…
-
Hi @raphaelsty,
first of all, thanks a lot for this project. I really appreciate its simplicity and effectiveness.
Question: do you have any plans to implement ColBERT V2?
Best wishes.
-
**What is the feature?**
In order to effectively use [SAHI](https://github.com/obss/sahi) today, I have to train a new model based on the patches that SAHI creates, and the only way I've found to …
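For reference, a minimal sketch of the patch-creation step the poster describes, using SAHI's COCO slicing utility; the paths, slice size, and overlap values are placeholders, and the exact keyword set may differ between SAHI versions.

```python
# Hedged sketch: slice a COCO-format dataset into patches so a detector can be
# trained on them. File paths and slicing parameters are assumptions.
from sahi.slicing import slice_coco

coco_dict, coco_path = slice_coco(
    coco_annotation_file_path="annotations/train.json",  # hypothetical path
    image_dir="images/train/",                           # hypothetical path
    output_coco_annotation_file_name="train_sliced",
    output_dir="sliced/train/",
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
# The sliced images plus `coco_path` can then be fed into the usual training
# pipeline of whichever detector is being fine-tuned.
```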
-
Hi, thanks again for sharing this project.
I would like to ask about some details of “Multi-space Alignment”.
![image](https://github.com/user-attachments/assets/43f3f92e-19e2-4354-80b8-6219619328ba)
I…
-
```
import argparse
import logging
import os
import pdb
from torch.autograd import Variable
import os.path as osp
import torch
from torch.optim.lr_scheduler import StepLR, MultiStepLR
import …
```