-
How to use text as a prompt for segmentation
-
The Stable Diffusion community has many examples that use weights in prompts. For example, `'Cat with !black nose! !!blue eyes!!'` should place a higher weighting on **black nose** and an even high…
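There is no single standard syntax for prompt weights, but the `!…!` convention above can be read as "more `!` marks mean a stronger weight". As a rough sketch, it could be parsed into (text, weight) pairs like this; the parser and the `boost` factor are illustrative assumptions, not any specific UI's implementation:

```python
import re

def parse_weighted_prompt(prompt, boost=1.1):
    """Split a prompt into (text, weight) pairs.

    Hypothetical convention: a span wrapped in n '!' marks on each side
    gets weight boost ** n; unwrapped text gets weight 1.0.
    """
    pieces = []
    for match in re.finditer(r"(!+)(.+?)\1|([^!]+)", prompt):
        bangs, emphasized, plain = match.groups()
        if emphasized is not None:
            pieces.append((emphasized.strip(), round(boost ** len(bangs), 3)))
        elif plain.strip():
            pieces.append((plain.strip(), 1.0))
    return pieces

print(parse_weighted_prompt("Cat with !black nose! !!blue eyes!!"))
# → [('Cat with', 1.0), ('black nose', 1.1), ('blue eyes', 1.21)]
```

The resulting weights would then be applied when combining per-token embeddings; how that combination is done varies between implementations.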
-
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
WARNING 06-17 14:57:49 ray_utils.py:46] Failed to import Ray with ModuleNotFoundE…
-
### Describe the issue
Code:
```python
# Copyright (c) 2024 Microsoft
# Licensed under The MIT License [see LICENSE for details]
from vllm import LLM, SamplingParams
from minference impor…
-
Hi, thank you for your work.
When will the text option be completed?
Thanks
-
### Your current environment
The Ray version is 2.10.0 and the vLLM version is 0.5.0+cu117.
### 🐛 Describe the bug
Using tp=2 with the code listed below:
```python
from vllm import LLM, SamplingParams
…
-
Is there a way to support pipelines with CPU offloading enabled?
The current implementation seems unable to handle this condition:
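As background, model CPU offloading keeps each pipeline component on the CPU and moves it to the accelerator only for its forward pass. Here is a toy, runnable sketch of that move-use-move-back pattern; the classes below are hypothetical stand-ins for real torch modules, not diffusers APIs:

```python
from contextlib import contextmanager

# Toy stand-in only: mimics just enough of a torch module to show the pattern.
class ToyComponent:
    """Pretend module that records which device it currently lives on."""
    def __init__(self, name):
        self.name = name
        self.device = "cpu"

    def to(self, device):  # mimics Module.to(device)
        self.device = device
        return self

@contextmanager
def on_accelerator(component, device="cuda"):
    component.to(device)       # move to the accelerator just before use
    try:
        yield component
    finally:
        component.to("cpu")    # offload back to CPU to free accelerator memory

unet = ToyComponent("unet")
with on_accelerator(unet):
    assert unet.device == "cuda"   # resident on the accelerator during the step
assert unet.device == "cpu"        # offloaded again once the step finishes
```

In real diffusers pipelines this pattern is enabled with `pipe.enable_model_cpu_offload()`, which installs hooks that do the moves automatically; the question is whether that interacts correctly with the setup below.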
```python
import gc
import torch
from diffusers import StableDiffusion3Pipe…
-
I am looking for a state-of-the-art (SOTA) model for **text prompt** segmentation. Currently, I am aware of two choices: [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-A…
-
I use different models for different purposes. I realized that I would like to be able to quickly switch the "system prompt" for a model.
For example, use one prompt for "Java Programming" and another …
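One way to make that switch quick is a small registry of named system prompts that is consulted when the chat request is built. A minimal sketch, assuming a chat-style `messages` list; the profile names and prompt texts are made up for illustration:

```python
# Hypothetical registry of named system prompts.
SYSTEM_PROMPTS = {
    "java": "You are an expert Java programmer. Answer with idiomatic Java.",
    "writing": "You are a careful technical editor. Improve clarity and tone.",
}

def build_messages(profile, user_text):
    """Return a chat 'messages' list with the selected system prompt first."""
    system = SYSTEM_PROMPTS.get(profile)
    if system is None:
        raise KeyError(f"unknown profile: {profile!r}")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages("java", "How do I read a file?")
print(msgs[0]["role"])  # the system prompt always comes first
```

Switching profiles is then a one-word change per request, rather than editing the prompt text in place.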
-
I have a test application where Prompt.Select is located in a loop, and different actions are performed depending on the actual selection. If the actions also write messages to the console, the Pro…