-
### System Info
Privately hosted instance of TGI
Version: 2.2.0
Deployed as a standalone kserve predictor
Model: Mixtral-8x7b-instruct, also llama3-1-70b-instruct (the same prompt…
-
Sorry for a newbie question; I couldn't find an answer. I succeeded in launching the server with unquantised Mistral-7B:
```
python3 -m sglang.launch_server --model-path mistralai/Mistral-7B-Instruct-v0.2…
```
-
Just started hitting this an hour ago or so. Model info from the mistralai endpoint is bad.
```
from mistralai.client import MistralClient
api_key = ''
client = MistralClient(api_key=api_key)
client.…
```
-
Are you going to add an embeddings endpoint?
-
I updated Ollama from 0.1.16 to 0.1.18 and encountered the issue.
I am using Python to run LLM models with Ollama and LangChain on a Linux server (4 x A100 GPUs).
There are 5,000 prompts to ask and get…
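Running 5,000 prompts one at a time is slow; a common pattern is to fan the prompts out with a bounded concurrency limit so the server isn't overwhelmed. A minimal sketch, where `ask_model` is a hypothetical stand-in for the actual Ollama/LangChain call (not the reporter's code):

```python
import asyncio

async def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for the real Ollama/LangChain call.
    await asyncio.sleep(0)  # simulate network I/O
    return f"answer to: {prompt}"

async def run_all(prompts, max_concurrency: int = 8):
    # Semaphore caps how many requests are in flight at once.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(p):
        async with sem:
            return await ask_model(p)

    # gather preserves input order in its results.
    return await asyncio.gather(*(bounded(p) for p in prompts))

if __name__ == "__main__":
    prompts = [f"prompt {i}" for i in range(20)]
    answers = asyncio.run(run_all(prompts))
    print(len(answers))  # → 20
```

Tuning `max_concurrency` to the number of GPUs and the server's batch size usually matters more than raw request count.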
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
After setting up a SQL Connector (MySQL), the agent answers that there is no active connection to the database when I ask for the…
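When an agent reports no active connection, a frequent culprit is a malformed connection string. A stdlib-only sanity check is sketched below; the `mysql://` DSN shape is a hypothetical example for illustration, not necessarily AnythingLLM's actual format:

```python
from urllib.parse import urlparse

def check_mysql_dsn(dsn: str) -> list[str]:
    """Return a list of problems found in a mysql:// style DSN."""
    problems = []
    parsed = urlparse(dsn)
    if parsed.scheme != "mysql":
        problems.append(f"unexpected scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        problems.append("missing host")
    if not parsed.username:
        problems.append("missing username")
    if parsed.path in ("", "/"):
        problems.append("missing database name")
    return problems

# Example: a DSN that forgot the database name
print(check_mysql_dsn("mysql://root:secret@db-host:3306"))
# → ['missing database name']
```

With Docker specifically, also check that the host in the DSN is reachable from inside the container (e.g. `localhost` there refers to the container, not the host machine).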
-
### Your current environment
docker image: vllm/vllm-openai:0.4.2
Model: https://huggingface.co/alpindale/c4ai-command-r-plus-GPTQ
GPUs: RTX8000 * 2
### 🐛 Describe the bug
The model works f…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
Nazim, help me please! What's wrong? The model is successfully connected to the dialog base and I specify the correct API key, but when I ask in the sandbox it gives an error: There was an error pro…