-
**Describe**
Model I am using (UniLM, MiniLM, LayoutLM ...): VLMO/BEiTv3
Is there any chance you could share the pre-training datasets used in VLMO/BEiTv3 through Baidu Net Disk or Google Cloud, as many imag…
-
What can I do about the problems I ran into when running the train.py file?
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\site-packages\numpy\core\shape_base.py", line 283, in vstack
…
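The traceback above is cut off, but errors raised from `numpy/core/shape_base.py` inside `vstack` are most often a shape mismatch: every array passed to `np.vstack` must share the same trailing dimensions. A minimal sketch of both the failure and the fix (the shapes here are illustrative, not taken from the original code):

```python
import numpy as np

# np.vstack stacks arrays along a new leading axis, so all inputs
# must agree in every dimension except the first.
a = np.ones((2, 3))
b = np.ones((2, 4))  # mismatched column count

try:
    np.vstack([a, b])
except ValueError as e:
    # This is the kind of error shape_base.py raises on mismatch.
    print("vstack failed:", e)

# Making the trailing dimensions agree fixes it:
b = np.ones((2, 3))
stacked = np.vstack([a, b])
print(stacked.shape)  # (4, 3)
```

If this matches your case, check the shapes of the arrays being stacked in train.py (e.g. with `print(arr.shape)`) just before the `vstack` call.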
-
Hello, are there any pre-trained parameters available? Without pre-training, the task I am working on underfits, and pre-training with your Graphormer myself is slow. Is there any soluti…
-
When I try to run the code below, I get this error in the pretrain function:
**Error**
```
File "C:\Users\fabio\Desktop\wetransfer-08d028\Rope_ex_v1.5\RL_Training\behaviour_cloning.py", line 40, i…
-
The current approach to finetuning the model on a downstream task is to freeze the DINOv2 backbone and finetune the head, but I want to use a large unlabelled dataset to continue training the DINOv2 to fit …
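For reference, the "freeze the backbone, finetune the head" setup described above can be sketched in a few lines of PyTorch. This is a minimal illustration with a stand-in backbone and placeholder dimensions (`embed_dim`, `num_classes` are assumptions), not the actual DINOv2 training code:

```python
import torch
import torch.nn as nn

embed_dim, num_classes = 384, 10  # placeholder sizes

# Stand-in for a DINOv2 backbone (in practice loaded, e.g., via torch.hub).
backbone = nn.Sequential(nn.Linear(768, embed_dim), nn.GELU())

# Freeze every backbone parameter so only the head receives gradients.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

head = nn.Linear(embed_dim, num_classes)

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(4, 768)        # dummy batch
with torch.no_grad():
    feats = backbone(x)        # frozen feature extraction
logits = head(feats)
loss = logits.sum()            # dummy loss for illustration
loss.backward()

# Sanity check: backbone gradients stay None, head gradients exist.
assert all(p.grad is None for p in backbone.parameters())
assert head.weight.grad is not None
```

Continuing self-supervised pre-training of the backbone itself (the thing being asked for) is a different setup: the backbone parameters stay trainable and a DINO-style objective replaces the supervised head.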
-
Hi, are the weights of the fMRI pre-trained encoder available? I would like to use the pre-trained model as is, without further pre-training.
-
I'm trying to pre-train PRIMERA on the processed NewsHead dataset. Could you give me a little more detail on how to implement this?
-
In your paper, the accuracy on the food dataset reached 94%. Which versions of BERT and ViT did you use? When I reproduced your code, I found that you only provided the base version.
-
Hi, could you share the pre-trained weights for RODNet-HG and RODNet-HGWI?
-
Hello, it's Arthur,
Thank you for your great work.
I would like to ask whether it is possible to get the specific lag indices you use during the pre-training or zero-shot phases.
In the Colab tut…