-
Not sure if this RNN counts as an LLM, but if so it would be nice to have it; let me know what needs to be done for packaging.
https://www.rwkv.com/
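For context on why RWKV is interesting to package: it is an RNN, so inference carries a fixed-size recurrent state instead of a KV cache that grows with context length. A toy sketch of that pattern (weights and dimensions are made up for illustration, not RWKV's actual architecture):

```python
import math
import random

def rnn_step(state, x, w_state, w_in):
    # The next hidden state depends only on the previous state and the
    # current input, so memory stays constant as the sequence grows,
    # unlike a transformer KV cache that grows with context length.
    return [math.tanh(s * w_state + xi * w_in) for s, xi in zip(state, x)]

random.seed(0)
d = 4
state = [0.0] * d
for _ in range(1000):  # a "long" input sequence
    x = [random.uniform(-1.0, 1.0) for _ in range(d)]
    state = rnn_step(state, x, 0.9, 0.5)

print(len(state))  # state size is unchanged after 1000 steps: 4
```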
-
Windows build fails while llama-cpp-python works
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
### Contact Details
_No response_
### What happened?
Hi there, I have just attempted to run the new [Mistral-Nemo](https://mistral.ai/news/mistral-nemo/) with llamafile on a [gguf](https://huggingf…
-
**LocalAI version:**
```
v1.25.0-cublas-cuda12-ffmpeg
```
**Environment, CPU architecture, OS, and Version:**
```
# uname -a
Linux localai-ix-chart-f8bbbb7c7-x6xx9 6.1.42-production+truen…
-
Hey, thank you so much for the great model and this repo!
Would you be willing to add support for this chat format to llama-cpp-python, so that we can use function calling (and JSON mode) with thei…
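For anyone exploring this: a chat format boils down to a function that renders the message list into the model-specific prompt string (function-calling formats additionally inject tool schemas). A minimal self-contained sketch of the idea using a ChatML-style template, purely as an illustration and not llama-cpp-python's actual registration API:

```python
def render_chatml(messages, add_generation_prompt=True):
    # ChatML-style template; real chat formats are model-specific, and
    # function-calling formats also embed tool definitions in the prompt.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    if add_generation_prompt:
        # Leave the assistant turn open for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
print(prompt)
```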
-
### Describe the bug
I tried both the manual install and the one click install for Linux. My OS is a fresh install of Ubuntu 24.04. I've previously used this model on Windows 10 with text-generation-…
-
### What happened?
I fine-tuned the **InternLM2 7b-chat** model in **LLamaFactory** using a custom dataset and **lora**, exported the safetensors model, and converted it to gguf format using `convert…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as o…
-
Llama.cpp recently added support for [Phi-2](https://huggingface.co/microsoft/phi-2) model (https://github.com/ggerganov/llama.cpp/pull/4490)
Since this is using llama.cpp, I've tried configuring t…