-
I am looking for a state-of-the-art (SOTA) model for **text prompt** segmentation. Currently, I am aware of two choices: [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-A…
-
### Feature Description
I am asking users to enter some text and submit it, and I am using the Vercel AI SDK to generate a response from ChatGPT for that text. However, I also want to check if the text contains…
-
I want the model to segment based on an input string, e.g. "red cars".
It seems that this is not yet supported in this implementation.
If I find time, I could try to add this. But I need a star…
-
How to use it with text prompt? I noticed that there is only point and box prompt in the code.
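A common workaround when an implementation only exposes point and box prompts is to pair it with a text-conditioned detector: the detector turns the text query into bounding boxes, and those boxes are fed in as box prompts. The sketch below shows only that glue logic; the two model calls are deliberately stubbed stand-ins (real ones would be something like GroundingDINO for detection and SAM's box-prompted predictor), not this repo's actual API:

```python
# Sketch: text prompt -> boxes (open-vocabulary detector) -> box prompts.
# Both model calls below are STUBS standing in for real models; only the
# glue logic (threshold + one mask per surviving box) is the point here.

def detect_boxes(image, text, score_threshold=0.3):
    """Stub for a text-conditioned detector: returns (box, score) pairs."""
    # A real detector would run inference here; we return fixed dummy boxes.
    candidates = [((10, 20, 110, 220), 0.92), ((300, 40, 380, 150), 0.15)]
    return [(box, s) for box, s in candidates if s >= score_threshold]

def segment_box(image, box):
    """Stub for box-prompted mask prediction (e.g. SAM's box prompt)."""
    x0, y0, x1, y1 = box
    return {"box": box, "area": (x1 - x0) * (y1 - y0)}  # placeholder "mask"

def segment_by_text(image, text):
    """Glue: every detection above threshold becomes one box prompt."""
    return [segment_box(image, box) for box, _ in detect_boxes(image, text)]

masks = segment_by_text(image=None, text="red cars")
print(len(masks))  # only the high-confidence detection survives
```

This is essentially what Grounded-Segment-Anything does internally, so if the box prompt already works, text prompting reduces to plugging in a detector in front of it.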
-
### Question Validation
- [X] I have searched both the documentation and Discord for an answer.
### Question
After using `chat_engine = index.as_chat_engine()`, I discovered that `chat_engine`…
-
### Describe the issue
```python
from vllm import LLM, SamplingParams
from minference import MInference
prompts = [
"Hello, my name is",
"The president of the United States is",
…
-
### Bug Description
I have written a test script named `test_have_json.py` with the following content:
```python
from llama_index.core.base.llms.types import ChatMessage, MessageRole
from llama_i…
-
Hi @yuvalkirstain,
Great work! How can I download only the text prompts? Thanks.
-
I notice referent tokens are interleaved in the output. Can multiple referent tokens appear in a single text prompt, such as "Describe the table and the chair ."?
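If the referent token is a literal marker in the decoded output string, pairing multiple referents with their text spans is a simple split. The marker `<ref>` below is an assumption for illustration only, not this model's actual token:

```python
import re

# Assumed marker "<ref>"; the real referent token depends on the model's vocab.
output = "Describe the table <ref> and the chair <ref> ."

# Count how many referents appear in one prompt's output.
referents = re.findall(r"<ref>", output)
print(len(referents))  # -> 2

# Text segments around each marker, e.g. to pair spans with masks later.
spans = [s.strip() for s in re.split(r"<ref>", output) if s.strip()]
```

So at the string level nothing prevents several referent tokens per prompt; whether the model emits them for a compound request like "the table and the chair" depends on its training.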
-
### Your current environment
Problem
### 🐛 Describe the bug
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import torch
# Initialize the tokenizer
tokeniz…