-
## Paper Links
- [arXiv](https://arxiv.org/abs/2304.07193)
- [GitHub](https://github.com/facebookresearch/dinov2)
## Publication Date (yyyy/mm/dd)
2023/04/14
## Overview
### Research Question
A concise statement of the question the research aims to answer…
-
We are following the concerns raised about this study publicly on this forum (#23, #20, #21), on PubPeer (https://pubpeer.com/publications/C8CFF9DB8F11A586CBF9BD53402001), and privately. Mo…
-
Hi, I am looking for the ConvNeXt-V2-H model with 22k supervised fine-tuning but without 1k supervised fine-tuning. I want to use it for fine-tuning on ADE20K, to reproduce the result in Table 7 of the paper.
-
The `AutoModelForCausalLM` class does not include ChatGLM — how did you work around this?
-
Hello, I am wondering about the performance of supervised fine-tuning with the CONTRIQUE encoder compared to an ImageNet-pretrained model, but I can't find such an experiment in the paper.
Can you share the results if you have done…
-
Thanks a lot for releasing the code and the scripts for pre-training.
I'm trying to reproduce the numbers on MS MARCO after fine-tuning, and it would be great if you could also release the scripts f…
-
Hello HsinYing, nice work!
I am wondering how you fine-tune on the UCF101 dataset in the unsupervised setting. In your paper, you report mean classification accuracy over the 3 splits of the UCF101 dataset. Could you pl…
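For reference, the standard UCF101 protocol reports the arithmetic mean of top-1 accuracy over the three official train/test splits. A minimal sketch of that aggregation (the per-split accuracies below are placeholder values, not results from the paper):

```python
# Hypothetical per-split top-1 accuracies on UCF101
# (placeholder values, not results from the paper)
split_accuracies = [0.912, 0.905, 0.908]

# The reported number is the mean over the 3 official splits
mean_accuracy = sum(split_accuracies) / len(split_accuracies)
print(f"mean accuracy over 3 splits: {mean_accuracy:.4f}")
```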
-
Hi,
After reading your paper and code, I have two questions.
First, in pretrain-gnns-master/model_gin there are several supervised_.pth files; how were they obtained?
Second, in README.md Fi…
-
### Request for Release of Pretrained NLLB-LLM2Vec Model
Hello Team,
Could you please release the pretrained NLLB-LLM2Vec models mentioned in your paper on "Self-Distillation for Model Stacking…
-
### Is your feature request related to a problem? Please describe.
_No response_
### Solutions
After LoRA fine-tuning on long text, the generated output contains repetition.
Training parameters:
```
CUDA_VISIBLE_DEVICES=1,2 torchrun --nproc_per_node 2 supervised_finet…
```