-
Thanks for your great work!
I want to fine-tune the model on my own dataset. Could you share a fine-tuning script for further development?
Thanks!
-
In Tables 3 & 4, is the same dataset used during pre-training and fine-tuning, or does the fine-tuning happen only on the ImageNet-1k dataset?
-
Hello, I previously sent a request for the fine-tuning code but have not received a reply yet. Is there another way to obtain it? Thank you!
-
Hello! Is it possible to provide an image-to-image fine-tuning code example like [instruct pix2pix](https://github.com/huggingface/diffusers/tree/main/examples/instruct_pix2pix)?
-
Hello folks,
I am trying to fine-tune GLiNER on a custom dataset (LMR/LMD), and after some steps I encountered this issue:
```
File "~/gliner_finetuing.py", line 79, in <module>
main()
File "~/…
-
Hey, I recently tried to use PiSSA initialization during the LoRA fine-tuning process, but I noticed that initializing with PiSSA takes a very long time.
I set all linear layers as my LoRA target layers, and the…
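For context on why that step is slow: PiSSA initializes each adapter from the principal singular components of the corresponding base weight, which means running an SVD per target matrix, and targeting every linear layer multiplies that cost across the whole model. A minimal numpy sketch of the idea (the function name and shapes here are my own illustration, not PEFT's actual implementation):

```python
import numpy as np

def pissa_init(W, r):
    """Illustrative PiSSA-style init: split W into a rank-r principal
    part (the trainable LoRA factors) and a frozen residual."""
    # Full SVD of the weight matrix -- this is the expensive step,
    # repeated once per targeted linear layer.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * np.sqrt(S[:r])            # (out_dim, r)
    B = np.sqrt(S[:r])[:, None] * Vt[:r]     # (r, in_dim)
    W_res = W - A @ B                        # frozen residual weight
    return A, B, W_res
```

By construction, `A @ B + W_res` reconstructs the original weight exactly, so the model's forward pass is unchanged at initialization; only the SVDs make it slow.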
-
It's quite easy to fine-tune one of the OpenAI CLIP checkpoints with this codebase:
https://github.com/Zasder3/train-CLIP-FT
It uses pytorch-lightning. May be worth pursuing.
-
Hi,
I ran the W2A8 ImageNet fine-tuning script, directly setting **n_bits_w = 2, n_bits_a = 8**.
However, it produces unexpected results in the W2A8 setting. Could you please advise if there…
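As background on why W2A8 is a hard setting: 2-bit weights leave only four representable levels per tensor, so quantization error is large unless calibration is done carefully. A toy uniform-quantizer sketch to make that concrete (the function and names are illustrative, not this repo's actual code):

```python
def uniform_quantize(x, n_bits):
    """Quantize a list of floats to 2**n_bits uniform levels and
    dequantize back, returning the rounded approximation."""
    qmax = 2 ** n_bits - 1
    lo, hi = min(x), max(x)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    # Round each value to its nearest level, then map back to floats.
    q = [round((v - lo) / scale) for v in x]
    return [qi * scale + lo for qi in q]
```

With `n_bits=8` the round-trip error per value is at most half a step (~0.2% of the range), but with `n_bits=2` it can be up to a sixth of the full range, which is why W2 typically needs careful per-layer calibration or reconstruction.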
-
```
features = self.dino_block.forward_features(x.to("cuda"))['x_norm_patchtokens']
File "/root/.cache/torch/hub/facebookresearch_dinov2_main/dinov2/models/vision_transformer.py", line 258, in forward_…
```
-
A year later, unimim still has not released the source code. A year ago, I spent a long time trying to replicate and analyze this paper, but I was unable to do so. Are there any plans to make the source…