-
Cannot import this from the library:
optimum-1.21.4
transformers-4.43.4
How do I fix this issue? The code I am running is from the quick-start documentation of `Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4…
-
UI updates to AI Conductor
- [x] multi-modal inputs
- [x] json editor at task level and task roadmap level
- [x] add back run buttons / run all?
- [x] update chat ribbon so that it is not task (gener…
-
Is there a (planned or existing) way to have variable IP-Adapter weights for videos (e.g. with AnimateDiff)?
That means setting different values for different frames, as both scaling and masking cur…
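A hypothetical sketch of what a per-frame weight schedule could look like (`frame_weights` is an illustrative helper, not an existing AnimateDiff or IP-Adapter API): linearly ramp the adapter scale from `start` to `end` across the frame count, producing one weight per frame.

```python
def frame_weights(start: float, end: float, n_frames: int) -> list[float]:
    """One IP-Adapter scale per frame, interpolated linearly."""
    if n_frames == 1:
        return [start]
    step = (end - start) / (n_frames - 1)
    return [start + i * step for i in range(n_frames)]

# e.g. fade the reference image's influence out over 5 frames
print(frame_weights(1.0, 0.0, 5))  # [1.0, 0.75, 0.5, 0.25, 0.0]
```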
-
```
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
from transformers import Qwen2VLProcessor
from awq.models.q…
```
-
### Your question
Is there a node for FLUX that supports TiledSampler? The ones I've tried don't work with UNet models, and this could help generate larger images on weak graphics cards.
I have a …
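Not a ComfyUI node, just a sketch of the idea behind tiled sampling: split a large canvas into overlapping tiles so each denoising pass fits in limited VRAM, then blend the overlaps. This illustrative helper computes the tile start offsets along one axis.

```python
def tile_starts(length: int, tile: int, overlap: int) -> list[int]:
    """Start offsets of `tile`-sized windows covering `length` pixels,
    each overlapping the previous window by at least `overlap` pixels."""
    if tile >= length:
        return [0]  # one tile already covers the whole axis
    stride = tile - overlap
    starts = list(range(0, length - tile, stride))
    starts.append(length - tile)  # last tile flush with the edge
    return starts

print(tile_starts(2048, 1024, 256))  # [0, 768, 1024]
```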
-
```
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor, AutoConfig
from qwen_vl_utils import process_vision_info
import torch
model_name = "Qwen/Qwen2-VL-7B-I…
```
-
### Your current environment
from PIL import Image
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info
MODEL_PATH = '/w…
-
Sorry, I am new to image/video generation. What is the encoder in the paper?
-
Thanks for your awesome work! Do you have plans to train a 10B or even larger model?
-
Use the HuggingFace API to run open-source video models and integrate them into the Image/Video Generation Microservice.
Ensure that there is proper video generation with the chosen model, …
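A minimal sketch of wiring a hosted video model into the microservice through the HuggingFace Inference API. The model id and prompt are placeholders, and sending the request is left to the service's HTTP client; this only builds the endpoint URL and JSON body.

```python
import json

API_BASE = "https://api-inference.huggingface.co/models"

def build_request(model_id: str, prompt: str) -> tuple[str, bytes]:
    """Return the endpoint URL and JSON body for a text-to-video call."""
    url = f"{API_BASE}/{model_id}"
    body = json.dumps({"inputs": prompt}).encode("utf-8")
    return url, body

url, body = build_request("some-org/some-video-model", "a cat playing piano")
print(url)
```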