-
Hi. I'm using my own dataset instead of MNIST. My data originally has 188318 rows and 130 columns.
After the pretraining and reconstruction steps, the new dataset has lost 30 rows, leaving 188288 rows…
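One possible cause worth checking (a hypothesis, not something confirmed by the post): if the reconstruction loop batches the data with the last partial batch dropped, a batch size of 32 would discard exactly 30 rows from 188318. A minimal sketch of that arithmetic:

```python
# Hypothetical check: does drop-last batching explain the missing rows?
# 188318 and 188288 are the row counts from the post; batch_size=32 is
# an assumed value, not one stated by the poster.
rows, batch_size = 188318, 32

full_batches = rows // batch_size        # number of complete batches
kept = full_batches * batch_size         # rows that survive reconstruction
dropped = rows - kept                    # rows in the discarded partial batch

print(kept)     # 188288
print(dropped)  # 30
```

If the numbers line up like this in your pipeline, keeping the final partial batch (e.g. `drop_last=False` in a PyTorch `DataLoader`) should preserve all rows.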
-
Hi ShuangXieIrene:
Did you use MS COCO for pretraining before you trained the FSSD MobileNetV1 on VOC2007?
When I use VOC2007 and VOC2012 as training data for FSSD MobileNetV1, my perform…
-
JFYI: Pretraining for the 3B and 7B models is complete:
- https://huggingface.co/openlm-research/open_llama_3b
- https://huggingface.co/openlm-research/open_llama_7b
PS: Training for a 13B model …
-
Hi Yuan,
I have tested your AST model pretrained on AudioSet on my own dataset, and I noticed that it achieves performance similar to EfficientNet pretrained on AudioSet using the PSLA pipeline.
I was won…
-
Hi, my name is Richard Zhang, a researcher in the Engineering Department, University of Cambridge. Your method of curating a CiteSumm dataset for pretraining, and then achieving SoTA by a few-sh…
-
Hello, thank you very much for your contribution. I tried to run your example, but I ran into some problems. The example is as follows:
>python Main.py --dataset last-fm --pretrain -1
The error is as f…
-
Hello, I am working on transferring MAE to CIFAR, and I find that MAE is no better than a supervised pre-trained ViT on small datasets. Could you give me some advice?
-
Hi!
I noticed the new models and zero-shot tutorials. The results look interesting. One thing I can't figure out is what exactly you mean by a "continual pretrained" checkpoint.
The term evoke…
-
![image](https://user-images.githubusercontent.com/43124010/144537827-cb5c2cdf-26ca-4106-bccb-355b72c15804.png)
![image](https://user-images.githubusercontent.com/43124010/144537877-688be103-e1cc-4a1…
-
Author information has not been updated yet.
References are empty.
The paper's contents are very interesting! And the review gave me a good summary.