-
It would be wonderful if the prompt for an image were automatically saved into the image's metadata. With popular and well-liked images, a hugely repetitive ask is people requesting what prompt was…
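A minimal sketch of how this could work for PNG output, assuming Pillow is available; the `"prompt"` key name and the helper functions are illustrative, not part of any existing tool:

```python
# Embed the generation prompt in a PNG text chunk so it travels with the image.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_prompt(image, path, prompt):
    meta = PngInfo()
    meta.add_text("prompt", prompt)      # stored as a tEXt chunk
    image.save(path, pnginfo=meta)

def read_prompt(path):
    with Image.open(path) as im:
        return im.text.get("prompt")     # PNG text chunks appear in .text
```

Readers (and the web UI itself) could then recover the prompt from the file alone, without asking the uploader.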
-
### Please describe your question
I used the training script in ppdiffusers/examples/text_to_image_laion400m/, launched with the single-machine multi-GPU command from that folder. I changed the batch size to 4; the machine has four 4090s, and at batch size 4 the per-GPU memory usage already exceeds 14 GB. Can this really run at batch size 16 on a 40 GB card? After enabling fp16, memory usage did not change noticeably.
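One possible explanation (an assumption here, not a diagnosis of this specific script) is that with mixed precision the optimizer state stays in fp32, so total memory does not halve. A back-of-envelope sketch with purely illustrative numbers:

```python
# Rough training-memory estimate: with fp16 mixed precision, weights and
# gradients shrink, but Adam keeps fp32 master weights plus two fp32 moments,
# so optimizer state dominates and the overall saving is modest.
# All numbers below are illustrative assumptions, not measurements.

def training_mem_gb(n_params, act_gb, fp16):
    bytes_per = 2 if fp16 else 4
    weights = n_params * bytes_per
    grads = n_params * bytes_per             # gradients match weight dtype
    optimizer = n_params * 4 * 3             # fp32 master weights + 2 moments
    acts = act_gb * (0.5 if fp16 else 1.0)   # activations roughly halve
    return (weights + grads + optimizer) / 1e9 + acts

full = training_mem_gb(1e9, 6.0, fp16=False)   # ~26 GB for a 1B-param model
mixed = training_mem_gb(1e9, 6.0, fp16=True)   # ~19 GB: far from a 2x saving
```

Under these assumptions, fp16 trims activations and gradients but leaves the large fp32 optimizer state untouched, which matches "memory did not change noticeably".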
-
I am getting a `pickle.UnpicklingError` when trying to resume training from a previously trained checkpoint with open_clip `v2.27.0+`.
This is similar to https://github.com/mlfoundations/open_clip/issue…
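One common cause of this class of error (an assumption here, not confirmed from the report) is a loader that restricts which pickled globals it will reconstruct, as `torch.load` does in its `weights_only` mode: a checkpoint containing extra Python objects then fails to unpickle. A stdlib-only sketch of that restricted-unpickling mechanism:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Allow only an explicit allowlist of globals, similar in spirit
    to torch.load's weights_only mode."""
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is not allowed")

class OptimizerState:  # stands in for a custom object stored in a checkpoint
    pass

blob = pickle.dumps({"state": OptimizerState()})
try:
    SafeUnpickler(io.BytesIO(blob)).load()
except pickle.UnpicklingError as e:
    print("refused:", e)
```

If that is the cause, the usual workarounds are loading with the restriction disabled (only for checkpoints from a trusted source) or allowlisting the offending class.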
-
Could you report the MD5 values for the 32 `.parquet` files?
That would let us verify that all the metadata was downloaded successfully.
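Once the reference values are published, each downloader could verify their copy with something like the sketch below; the `part-*.parquet` glob pattern is an assumption about the shard naming:

```python
# Compute MD5 checksums of the downloaded .parquet shards for comparison
# against published reference values.
import hashlib
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        # read in 1 MiB chunks so large shards don't need to fit in memory
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for p in sorted(Path(".").glob("part-*.parquet")):
    print(p.name, md5sum(p))
```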
-
Hi @LukasMut
https://github.com/ViCCo-Group/thingsvision/blob/ad903f383a8e647057010bd1618b0b8770e8f3a7/thingsvision/utils/alignment/transforms.py#L14
I wanted to ask whether this URL is still valid.
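A quick stdlib-only way to check reachability of a download link; the function is illustrative, and the URL to pass in would be the one referenced in `transforms.py`:

```python
# HEAD-request a URL and report whether it answers with a 2xx/3xx status.
import urllib.error
import urllib.request

def url_alive(url, timeout=10):
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

Note that some hosts reject HEAD requests, so a `False` here is a hint to investigate rather than proof the link is dead.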
…
-
options:
* https://huggingface.co/docs/datasets/add_dataset.html
* https://www.kaggle.com/datasets
* https://github.com/activeloopai/Hub
Those should make it easy to distribute the dataset of ur…
-
There is an issue dumping results for all the tasks/subsets in the sugarcrepe output JSON, no?
It runs over all the splits but only retains results for `sugar_crepe/swap_obj`.
```
$ clip_benchmark eva…
```
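If the cause is what it looks like, each subset's scores are being written to the same key (or the same output file), so only the last-completed subset survives. A minimal sketch of the fix, with illustrative task names and scores:

```python
# Accumulate evaluation results under a per-task key instead of a fixed one,
# so every sugarcrepe subset appears in the dumped JSON.
import json

results = {}
for task in ["sugar_crepe/swap_obj", "sugar_crepe/swap_att", "sugar_crepe/add_obj"]:
    score = {"acc": 0.5}      # stand-in for the real evaluation output
    results[task] = score     # keyed by task; a fixed key would overwrite
print(json.dumps(results, indent=2))
```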
-
Since you have the ViT-B/16 aesthetic model, I was just wondering whether you would have the ViT-B/16 pre-computed embeddings too.
-
Hi, Rom. I have downloaded LAION-400M and launched KnnService with the following arguments:
```python
indices_paths="indices_paths_ViTL14.json"
clip_model="ViT-L/14"
enable_hdf5=False
ena…
```
-
Hi,
Thanks so much for providing such impactful works (LAION, Open CLIP, Open CLIP benchmark) to the community!
I noticed a potential use of the mis-ordered KITTI prompts from https:/…