-
```
from open_flamingo import create_model_and_transforms

model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
    lang_encoder_path=model_p…
```
-
Finetune Kandinsky-2 on ImageNet-1k so that we can generate images from our specific CLIP embeddings.
This way, we can visually check the effects of steering.
Kandinsky-2: https://github.com/…
-
### Your current environment
```text
vLLM Version: 0.5.4@4db5176d9758b720b05460c50ace3c01026eb158
```
### How would you like to use vllm
![image](https://github.com/user-attachments/assets/aa642e…
-
I stumbled into a crash from a failed assertion on the current dev branch (b3b244df208131a9a931cc9b83527b504b461d66) after running the fuzzer for a bit.
```
thread '' panicked at /home/gekota/img-fuzz/zune-image/cr…
```
-
Our test suite is getting slower.
See stats over the past year (the same holds for Windows; macOS has a discontinuity, most likely due to an infrastructure change on the GitHub side).
![Image](https://git…
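
A quick first step when a suite slows down is ranking tests by wall-clock time; pytest has a built-in report for this (`pytest --durations=20`), and the same ranking is easy to do by hand on any (test, seconds) log. A minimal sketch, where the test names and timings below are invented placeholders rather than real CI data:

```python
# Rank tests by wall-clock time to find the slowest offenders.
# These durations are made-up illustrations, not real measurements.
durations = [
    ("test_io_roundtrip", 41.2),
    ("test_fast_path", 0.3),
    ("test_full_pipeline", 88.7),
    ("test_unit_math", 0.1),
]

slowest = sorted(durations, key=lambda t: t[1], reverse=True)
total = sum(secs for _, secs in durations)
for name, secs in slowest[:3]:
    print(f"{name}: {secs:.1f}s ({100 * secs / total:.0f}% of suite)")
```

In practice a handful of tests usually dominate, so fixing (or splitting) the top few entries of this list gives most of the win.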
-
Thank you for sharing the excellent code and checkpoints! I have run the code described in `Readme.md` and would like to confirm that I understood it correctly.
The current version of `dist…
-
I tried to convert the model to Core ML.
First I tried it in the terminal using this code,
which generated two files, an encoder and a decoder:
```
import torch
from PIL import Image
from torchvision import tran…
```
-
```
stop reason = EXC_ARITHMETIC (code=EXC_I386_DIV, subcode=0x0)
frame #0: 0x00000001002caae7 libflif.0.dylib`FLIF_IMAGE::read_row_RGBA8(this=0x00000001006bbb00, row=0, buffer=0x0000000116dd60…
```
-
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-335-g4751d664
Commit hash: 4751d6646d3889c4f0acb24d3b5a868a8208fb3a
Launching…
-
**Describe the bug**
I encountered a "Segmentation fault" while calling **stbi_load_gif_from_memory()** with the [animated gif](https://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia_logo_puzz…
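
When a native decoder segfaults on untrusted input, one mitigation on the caller's side is to sanity-check the buffer before handing it over. This is only a defensive pre-check, not a fix for the stb_image GIF decoder itself; the parsing below follows the standard GIF logical screen descriptor layout (6-byte signature, then little-endian u16 width and height):

```python
import struct

def gif_dimensions(data: bytes):
    # Validate the GIF signature and logical screen descriptor before
    # passing the buffer to a native decoder such as stb_image.
    # A full GIF header (screen descriptor included) is 13 bytes.
    if len(data) < 13 or data[:6] not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF")
    width, height = struct.unpack("<HH", data[6:10])
    if width == 0 or height == 0:
        raise ValueError("degenerate image dimensions")
    return width, height

# Minimal synthetic header: signature, 320x240, then the three
# remaining screen-descriptor bytes zeroed out.
header = b"GIF89a" + struct.pack("<HH", 320, 240) + b"\x00\x00\x00"
print(gif_dimensions(header))  # (320, 240)
```

A check like this rejects obviously malformed buffers cheaply, but crashes triggered by valid-looking headers with corrupt frame data still need a fix in the decoder.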