-
As described, I'm trying to spin up `mistralai/Mistral-7B-v0.1` using the examples in the README. This is on an EC2 `g5.xlarge` instance.
```
import mii
client = mii.serve("mistralai/Mistral-7B-v0.1")
resp…
-
Extend Anthropic connector to support function calling
-
### What would you like to see?
It would be great to be able to configure AnythingLLM with a vLLM model:
https://github.com/vllm-project/vllm
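For context on how such an integration could work: vLLM ships an OpenAI-compatible HTTP server, so any client that already speaks the OpenAI chat API can in principle be pointed at it. A minimal sketch, assuming vLLM is installed and using an example model name (the model, port, and prompt are illustrative, not from this issue):

```shell
# Start vLLM's OpenAI-compatible server (model name and port are examples).
python -m vllm.entrypoints.openai.api_server \
    --model mistralai/Mistral-7B-Instruct-v0.1 \
    --port 8000

# Any OpenAI-compatible client can then be pointed at the local endpoint:
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "mistralai/Mistral-7B-Instruct-v0.1",
          "messages": [{"role": "user", "content": "Hello!"}]
        }'
```

So AnythingLLM would mainly need a configurable OpenAI-compatible base URL to support this.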
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
I keep getting this error after adding a llama.cpp inference endpoint locally. Adding this configuration triggers the error:
```
"endpoints": [
{
"url": "http://localhost:8080",
…
-
Here's the log output:
```
[1] Server error:
[1] l-00002-of-00003.safetensors: 100%|██████████| 5.00G/5.00G [07:48
-
Hello,
First of all, thank you so much @InsightEdge01 for your work and your YT channel. Your projects are so interesting and look promising :)
I have a question: can I use "Question-AnswerPairGen…
-
Hello
I am running on the following machine:
CPU: 12th Gen Intel(R) Core(TM) i7-12700
RAM: 32GB, speed: 4400MT/s
GPU: NVIDIA RTX A2000 12GB
The model is:
llama-2-7b-chat.Q6_K.gguf
And it takes a…
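If the question is about generation speed: with llama.cpp-based runtimes, the usual first check on a machine like this is whether model layers are being offloaded to the GPU at all. A hedged sketch, assuming a CUDA build of the llama.cpp CLI (the layer count and prompt are illustrative):

```shell
# -ngl sets how many layers are offloaded to the GPU (here the RTX A2000);
# without it, inference runs entirely on the CPU. 35 covers all layers of a 7B model.
./main -m llama-2-7b-chat.Q6_K.gguf -ngl 35 -p "Hello" -n 128
```

If the runtime in use wraps llama.cpp, look for an equivalent "GPU layers" setting in its configuration.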
-
I have set:
```
(setopt ellama-provider
        (make-llm-ollama
         :chat-model "deepseek-coder:6.7b-base-q8_0"
         :embedding-model "deepseek-coder:6.7b-base-q8_0"))
```
But ellama seems to b…
-
### What happened?
Simple example:
```
import litellm
litellm.set_verbose = True
if __name__ == "__main__":
messages = [{"role": "user", "content": "Hello!"}]
base_url = "http://loc…