-
### Contributing guidelines
- [X] I have read [CONTRIBUTING.md](https://github.com/echasnovski/mini.nvim/blob/main/CONTRIBUTING.md)
- [X] I have read [CODE_OF_CONDUCT.md](https://github.com/echasnovs…
-
**Describe the solution you'd like**
I would like to see streaming responses rendered in real time in `streamlit` when using `OpenAIChatGenerator`
**Describe alternatives you've considered**
I have tried t…
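One pattern that might work here is bridging the generator's per-chunk callback to a plain Python generator via a queue, which Streamlit's `st.write_stream` can then consume. This is a generic sketch, not Haystack-specific code: `fake_llm` is a stand-in for a client that accepts a per-chunk callback (as `OpenAIChatGenerator`'s `streaming_callback` parameter does).

```python
import queue
import threading

_SENTINEL = object()

def callback_to_generator(run_with_callback):
    """Bridge a callback-based streaming API to a plain generator.

    `run_with_callback(on_chunk)` should invoke `on_chunk(text)` for each
    streamed piece of text and return when the response is complete.
    """
    q: "queue.Queue[object]" = queue.Queue()

    def worker():
        try:
            run_with_callback(q.put)
        finally:
            q.put(_SENTINEL)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is _SENTINEL:
            break
        yield item

# Hypothetical producer standing in for the real streaming LLM client.
def fake_llm(on_chunk):
    for piece in ["Hel", "lo ", "world"]:
        on_chunk(piece)

chunks = list(callback_to_generator(fake_llm))
print("".join(chunks))  # → Hello world
```

In a Streamlit app the resulting generator could be passed straight to `st.write_stream(...)`, with the real client's streaming callback in place of `fake_llm`.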
-
**Is your feature request related to a problem? Please describe.**
I'm using this tool on CPU only, and with certain prompts I get some kind of timeout.
This usually happens when I upload a document…
-
I started working on a GNOME extension to connect Ollama to GNOME. This is my first GNOME extension, and with my limited knowledge of JavaScript I’ve run into some issues and cannot make any progress …
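Whatever the client language, one part worth pinning down is Ollama's response format: `POST /api/generate` streams newline-delimited JSON objects, each carrying a `response` text fragment, with `done: true` on the final record. A small Python sketch of just the parsing step (the HTTP call itself is omitted; endpoint and field names follow Ollama's documented API):

```python
import json

def collect_ollama_stream(lines):
    """Accumulate the text from Ollama's NDJSON /api/generate stream.

    Each line is a JSON object; `response` holds the next text fragment
    and the final object carries `done: true`.
    """
    text = []
    for line in lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        text.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(text)

# Example stream in the shape Ollama sends, one JSON object per line.
stream = [
    '{"model":"llama3","response":"Hel","done":false}',
    '{"model":"llama3","response":"lo","done":false}',
    '{"model":"llama3","response":"!","done":true}',
]
print(collect_ollama_stream(stream))  # → Hello!
```

The same line-by-line accumulation translates directly to GJS/JavaScript once the HTTP streaming piece is in place.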
-
Currently Ollama supports LLaVA, which is super great.
I wonder whether there is a chance to load other similar models, like CogVLM?
https://github.com/THUDM/CogVLM
-
As LocalAI enables the use of AI for privacy-sensitive use cases, it would be great to have these abilities in OnlyOffice.
LocalAI can be used as a drop-in replacement for the OpenAI API. For usi…
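Because LocalAI exposes the OpenAI-compatible REST surface, a client only needs to point the base URL at the LocalAI server; the request body keeps the standard chat-completions shape. A minimal sketch using only the standard library (the base URL, port, and model name are placeholders for a local deployment):

```python
import json
import urllib.request

def chat_completion_request(base_url, model, user_message):
    """Build an OpenAI-style /v1/chat/completions request aimed at LocalAI."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint: a LocalAI instance on its default port 8080.
req = chat_completion_request("http://localhost:8080", "gpt-4", "Hello")
print(req.full_url)  # → http://localhost:8080/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) then behaves like a call to the OpenAI endpoint, with only the hostname swapped.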
-
Adding support for local models (ex. through llama.cpp) would make this project even more impactful. Many local models, especially at high parameter counts, come pretty close to ChatGPT 3.5 Turbo, so …
-
Officially ROCm no longer supports these cards, but it looks like other projects have found workarounds. Let's explore if that's possible. Best case, built-in to our binaries. Fall-back if that's n…
-
Here is my code:
```python
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context…
```
-
- [ ] [Chat Circuit - Experimental UI for branching/forking conversations : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1ehilj4/chat_circuit_experimental_ui_for_branchingforking/)
# Ch…