-
Hello again!
Would it be possible to modify the GMP fine-tune script to train a LoRA with PEFT for the CLIP ViT-G model, and then merge the LoRA into the model to get a new CLIP-G model?
Chat-GPT se…
-
Hello, I ran experiments on the CIFAR-FS and FC100 datasets using the FewTURE pre-trained Swin-ViT 800-epoch weights; all other configurations follow the code provided in the README file. T…
-
Hello,
I am using ViT on images with 2 classes:
['horses', 'humans']
model = ViT(
image_size = 256,
patch_size = 32,
num_classes = 2,
dim = 1024,
depth = 6,
heads = …
-
Could you please advise on how to change the backbone of the SimSiam example to ViT? (https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_simsiam_esa.html)
Additionally, for…
-
Hello, thank you for your great work.
I have a couple of specific questions regarding the experiment settings and results:
1. Are the OpenAI model training settings identical to the settings in …
-
### Feature request
I wonder whether the text-classification task could be supported in the ONNX export for CLIP? I want to use the openai/clip-vit-large-patch14 model for zero-shot image classification…
-
I am trying to fine-tune different quantized variants of ViT, and it seems that modeling_vit.py does not support this.
When:
```
train_dataloader = DataLoader(client_trainset, batch_si…
-
```
--> Config model
done
--> Loading model
I It is recommended onnx opset 19, but your onnx model opset is 13!
I Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export fo…
```
-
This is a great project that provides a generic solution to the medical research segmentation problem and facilitates the training of your own segmentation models. After installing it and then clickin…
-
Please use `wget https://storage.googleapis.com/vit_models/imagenet21k/R50+ViT-B_16.npz` to get the correct pretrained model.