-
Hi, great work! But when I try to take a look at the shape of last_hidden_state, I encounter some problems. The code is the same as in the official documentation:
from datasets import load_dataset
fro…
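For reference, here is a self-contained version of the shape check I'm attempting (the model name and input are placeholders of mine, not from the docs):

```python
# Minimal sketch: load a model and inspect last_hidden_state's shape.
# "bert-base-uncased" is just an example checkpoint, not the one in question.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 4, 768])
```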
-
Hi, great work!
I'm trying to reproduce your training results.
I trained with a batch size of 124 for 20k steps, but the AP on COCO is pretty low, as shown in the figure below:
![image](https:…
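In case the problem is on my evaluation side, this is roughly how I compute AP (a sketch using pycocotools; the file paths are placeholders, not from the repo):

```python
# Standard COCO AP evaluation with pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("my_detections.json")       # my model's detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, etc.
```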
-
Can anyone provide a toy dataset that I can use for a quick toy training run?
The dataset part is driving me crazy...
-
**Describe**
Model I am using (TextDiffuser):
The download link for the evaluation dataset MARIOEval is invalid; please update it.
-
Hi, I tried to run search_clip_images.py both inside your Docker container and outside it, and I hit the same problem either way: nothing is written to the image_url.txt file:
python3 search_clip_images.py \
"data/te…
-
In line 292: Ultrapixel_controlnet 0 True
Downloading shards: 0%| | 0/2 [00:00
-
### Question
Great job! I found there are two versions of the pretraining dataset: blip_laion_cc_sbu_558k and LLaVA-CC3M-Pretrain-595K. I'd like to know what the differences between them are and which one is …
-
### Team
SWANN
### Corresponding
flopper123
### Tasks
Task C
### Subsets and projections
LAION-2B: 1024-bit binary sketches (Hamming)
### Members
Christoffer Jakob Woldbye Romild
Joachim Al…
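For clarity, this is what we mean by comparing 1024-bit binary sketches under Hamming distance (a minimal numpy sketch of our own, not part of the task spec):

```python
# Hamming distance between two 1024-bit sketches packed into 128 uint8 bytes.
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=128, dtype=np.uint8)  # sketch A (1024 bits)
b = rng.integers(0, 256, size=128, dtype=np.uint8)  # sketch B (1024 bits)

# XOR marks differing bits; unpack and count them.
dist = np.unpackbits(a ^ b).sum()
print(dist)
```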
-
Awesome work!
However, I have had some difficulty reproducing the autoencoder's performance when training from scratch. Especially once the GAN loss is added, I get collapsed recons…
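For concreteness, this is roughly how I combine the losses (my own sketch of the usual discriminator warm-up recipe, not your exact code; the names and weights are assumptions):

```python
# Sketch: reconstruction loss plus a delayed adversarial term.
import torch
import torch.nn.functional as F

def ae_loss(recon, target, disc, step, disc_start=10_000, gan_weight=0.1):
    rec_loss = F.l1_loss(recon, target)
    # Keep the adversarial term off until the AE is stable; turning it on
    # too early is a common cause of collapsed reconstructions.
    if step < disc_start:
        return rec_loss
    logits_fake = disc(recon)
    g_loss = -logits_fake.mean()  # hinge-style generator loss
    return rec_loss + gan_weight * g_loss
```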
-
Hi Christoph,
Thanks for releasing this code. I just noticed that the MLPs have no activations in them, making them in some sense equivalent to a single linear layer. Is this an intentional choice?…
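Just to be explicit about the equivalence I mean, here is a tiny check (the sizes are illustrative): two stacked linear layers with no activation collapse to one linear layer.

```python
# Composing two Linear layers without a nonlinearity gives a single Linear map.
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(8, 16, bias=False), nn.Linear(16, 4, bias=False))
x = torch.randn(3, 8)

# The composed weight W2 @ W1 reproduces the "MLP" exactly.
W = mlp[1].weight @ mlp[0].weight
assert torch.allclose(mlp(x), x @ W.T, atol=1e-6)
```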