-
This is a ticket to track a wishlist of items you wish LiteLLM had.
# **COMMENT BELOW 👇**
### With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond …
-
### What is the issue?
### run hhao/openbmb-minicpm-llama3-v-2_5:fp16
msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
time=2024-05-29T2…
-
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using conversation format: phi3
Special tokens have been added in the vocabulary…
-
### What happened?
We expected llama.cpp to perform similarly to ipex-llm, but llama.cpp was almost two times slower even though all parameters were the same.
…
-
Hi,
Context:
- I'm experimenting with both `Ellama` and `gptel` at the moment, as I find nice things in both.
- Until recently, both `gptel` and `Ellama` were working correctly.
- Sometimes I ne…
-
### Feature request
None
### Motivation
None
### Your contribution
None
-
### Describe the bug
- I'm building a chat interface and adding a dropdown menu, but the dropdown is shown collapsed. The docs say "if the component has not already been rendered in a surrounding block, the component will be displayed collapsed below the chatbot". Maybe I'm misunderstanding this, but I can't figure out what to do. I just want the dropdown to appear normally in the layout instead of being collapsed by default. Please help!
- Also, is it possible to place the attached dropdown component at the top of the layout?
### Have y…
-
Issue: Inference using v1.6 is giving single-token output
Command:
```
python3 -m llava.serve.cli --model-path liuhaotian/llava-v1.6-mistral-7b --image-file "test_imag.png" --max-new-tokens 1…
-
Really new to this, but I got a few errors. I mentioned the bot, then it tried to reply but didn't. Here is the error:
![image](https://github.com/jakobdylanc/discord-llm-chatbot/assets/89128767/7ff0d05…
-
Hi,
I'm probably missing something about how ollama works, or I'm facing some kind of side effect.
I use a super simple code to get 3 outputs given a simple input `Why is the sky blue?` (in order t…
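The original code is truncated, but a loop like this can be sketched against ollama's default REST endpoint (`/api/generate` on port 11434). The model name and the seed handling below are assumptions for illustration, not from the original report:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default ollama endpoint

def build_request(prompt, model="llama3", seed=None):
    """Build an /api/generate payload; passing a fixed seed in `options`
    makes outputs reproducible across calls."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    if seed is not None:
        payload["options"] = {"seed": seed}
    return payload

def generate(prompt, **kw):
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt, **kw)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Three independent completions for the same prompt.
    for _ in range(3):
        print(generate("Why is the sky blue?"))
```

Without a fixed seed the three outputs will generally differ, since sampling is stochastic by default.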