-
### Search before asking
- [X] I have searched the Autodistill [issues](https://github.com/autodistill/autodistill/issues) and found no similar feature requests.
### Description
I think combining …
-
Congratulations, this work has been accepted at ICCV 2023.
When will the code be released?
-
Hi,
I am working on a custom medical imaging binary classification task. I am comparing the performance of DINO (ViT-Base) and DINOv2 (ViT-Large) after training both models, with eval_l…
-
Related issues:
- #6
- #25
- #47
- #80
- #84
- #99
-
Thanks for the great work. We have trained a [ControlNet+StableDiffusion based on the SAM segmentation mask](https://github.com/sail-sg/EditAnything) for fine-grained image generation.
Replacing the …
-
@luca-medeiros
These are the changes that I have made.
I am trying to use LangSAM with the SAM2 model. I have changed the code and made it compatible with SAM2, but now I am encountering an issue.
T…
-
Thanks for your work! I have some questions about model distillation.
"we leverage the same training loop with a few exceptions: we use a larger
model as a frozen teacher, keep a spare EMA of the st…
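The quoted recipe (a frozen teacher, a trainable student, and an EMA copy of the student) can be illustrated with a toy numeric sketch. The scalar "weights", learning rate, and momentum below are illustrative assumptions for clarity, not the values used in the paper:

```python
# Toy sketch of distillation with a frozen teacher and a student EMA.
# The "model" is a single scalar weight; the loss is (student - teacher)^2.

def distill_step(student, ema, teacher_out, lr=0.1, momentum=0.9):
    loss_grad = 2.0 * (student - teacher_out)          # d/dw (w - t)^2
    student = student - lr * loss_grad                 # gradient step toward the frozen teacher
    ema = momentum * ema + (1.0 - momentum) * student  # EMA copy lags behind the student
    return student, ema

student, ema = 0.0, 0.0
teacher_out = 1.0  # frozen teacher output: never updated during distillation
for _ in range(50):
    student, ema = distill_step(student, ema, teacher_out)

print(student, ema)  # student converges to the teacher; the EMA trails it
```

The EMA copy changes more slowly than the raw student, which is why it is often the checkpoint that gets kept or evaluated.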
-
If I want to run G-DINO multiple times on a single prompt,
can I save some time in inference somehow?
Or how could I distill the model / decrease the model weights or inference time when I know I have 1 prompt …
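When the text prompt never changes, one generic saving is to compute the prompt features once and reuse them across all images. The sketch below is a hedged illustration of that caching idea only; `encode_prompt` is a hypothetical stand-in for the model's text branch, not a real G-DINO API:

```python
# Memoize an expensive per-prompt computation so it runs once
# no matter how many images are processed with the same prompt.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def encode_prompt(prompt: str):
    global calls
    calls += 1           # count how often the "text encoder" actually runs
    return hash(prompt)  # placeholder for the real text features

for _ in range(100):     # 100 images, one shared prompt
    feats = encode_prompt("a cat")

print(calls)  # the encoder ran only once
```

Distilling or pruning the model itself is a separate, heavier option; feature caching like this is usually the cheapest first step for a fixed prompt.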
-
```
$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: SB [HDA ATI SB], device 0: ALC887-VD Analog [ALC887-VD Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: SB [HDA …
```
-
e.g. to support CoralNet