-
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
WARNING 06-17 14:57:49 ray_utils.py:46] Failed to import Ray with ModuleNotFoundE…
-
### systemRole
Stable Diffusion is a deep-learning text-to-image model that generates new images from input prompts, which can specify elements to include or omit.
Here, I introduce the concept of Pro…
-
I am still confused about how to use the prompt with `Mistral-7b-instruct` when I want to analyze the content of a text, such as summarizing or categorizing it.
So in my prompt I have a text and …
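A minimal sketch of how such an analysis prompt could be assembled, assuming Mistral's documented `[INST] ... [/INST]` instruct format (the task wording and the `task`/`text` parameters here are my own illustration, not part of the original question):

```python
def build_instruct_prompt(task: str, text: str) -> str:
    """Wrap a task description and the text to analyze in
    Mistral's [INST] ... [/INST] instruction format."""
    instruction = f'{task}\n\nText:\n"""\n{text}\n"""'
    return f"<s>[INST] {instruction} [/INST]"

# Example: ask the model for a one-sentence summary of a passage.
prompt = build_instruct_prompt(
    "Summarize the following text in one sentence.",
    "Mistral-7B-Instruct is a fine-tuned language model.",
)
```

The same helper works for categorization by changing the task line, e.g. "Classify the following text into one of: news, review, tutorial."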
-
I am trying to integrate Nemo Guardrails with our hosted LLM Model. For that, I have created a Custom LLM Class.
```py
class CustomLLM(BaseLanguageModel):
async def…
-
### Your current environment
The Ray version is 2.10.0 and the vLLM version is 0.5.0+cu117.
### 🐛 Describe the bug
Using tp=2 with the code listed below:
```python
from vllm import LLM, SamplingParams
…
-
### Description of the issue
When I try to name a character in a few games, the text input prompt doesn't move the cursor forward when I press a key on the keyboard.
### Reproduction steps
I record a…
-
Hello everyone, when I run the right-click menu text-to-text prompt completion, I get the following error and nothing happens. Any advice would be appreciated.
start_local_llm error
/mixlab/folder_paths False 'llamafile'
start_local_llm error
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
Is there a way to support pipelines with CPU offloading enabled?
It seems the pipeline is currently unable to handle this case:
```python
import gc
import torch
from diffusers import StableDiffusion3Pipe…
-
I use different models for different purposes. I realized that I would like to be able to quickly switch the "system prompt" for a model.
For example, use one prompt for "Java Programming" and another …
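One way this could be sketched, assuming an OpenAI-style chat message list (the profile names and prompt texts below are made up for illustration):

```python
# Named system prompts ("profiles") that can be swapped per task.
SYSTEM_PROMPTS = {
    "java": "You are an expert Java programmer. Answer with idiomatic Java.",
    "writing": "You are a careful copy editor. Improve clarity and grammar.",
}

def build_messages(profile: str, user_input: str) -> list:
    """Prepend the selected system prompt to the user's message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[profile]},
        {"role": "user", "content": user_input},
    ]

# Switching profiles is then just a different key lookup.
messages = build_messages("java", "How do I read a file?")
```

Keeping the prompts in a plain mapping like this makes switching a one-line change, regardless of which model or client library ultimately consumes the messages.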