-
- [LMStudio](https://lmstudio.ai/)
- [Continue](https://continue.dev/docs/intro)
-
I am using dolphin-2.2.1-mistral-7b.Q8_0.gguf with the airoboros grammar wrapper. Despite setting the temperature to 0.01 in the web UI and specifying that only data from archive memory should be used, when trying to find an…
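As background on why a temperature of 0.01 behaves almost deterministically: temperature divides the logits before the softmax, so values near zero collapse the distribution onto the single most likely token. A minimal sketch with hypothetical logits, not tied to any particular model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature and apply a numerically stable
    softmax. Temperatures near zero collapse the distribution onto
    the argmax, which is why 0.01 behaves almost deterministically."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max so exp() cannot overflow
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 1.0))   # probability mass spread across tokens
print(softmax_with_temperature(logits, 0.01))  # ~[1.0, 0.0, 0.0], near-greedy
```

Note that even at very low temperatures the model can still state things not present in its context; temperature only controls sampling randomness, not grounding.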
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/HamaWhiteGG/autogen4j/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There i…
-
### System Info
macOS 14.4.1
python 3.11.7
pandasai 2.0.35
### 🐛 Describe the bug
I'm unable to run pandasai with Ollama (llama3) locally; I run into the following error in the logs:
```
2024-0…
```
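For reference, pandasai 2.x ships a `LocalLLM` wrapper that can point at Ollama's OpenAI-compatible endpoint. A minimal sketch, assuming an Ollama server on its default port 11434 serving `llama3`; the sample dataframe is made up:

```python
# A minimal sketch, assuming pandasai 2.x and a running Ollama server;
# LocalLLM's api_base points at Ollama's OpenAI-compatible endpoint.
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm.local_llm import LocalLLM

llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3")
df = SmartDataframe(
    pd.DataFrame({"country": ["US", "DE"], "gdp": [21.4, 4.2]}),  # made-up data
    config={"llm": llm},
)
print(df.chat("Which country has the higher gdp?"))
```

If this still fails, the error usually indicates the base URL or model name does not match what the local server is actually exposing.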
-
Related: https://github.com/microsoft/guidance/issues/328
```python
import gradio as gr
import guidance
import torch
from server.model import load_model_main
from server.tools import load_too…
```
-
### What I've done:
1. conda create -n gptq python=3.9 -y
2. conda activate gptq
3. conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
4. git clone https://githu…
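A quick sanity check after the steps above, to confirm the conda environment actually picked up a CUDA-enabled PyTorch build (environment name `gptq` from step 1):

```shell
# Sanity checks for the environment created above.
conda activate gptq
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
nvidia-smi  # confirm the driver and GPU are visible
```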
-
Suggestion: adding OpenAI API compatibility would be much appreciated.
E.g. serving `localhost:port/v1` with routes such as `/models`, etc.
Open it up to connecting to the powerhouses: llama.cpp, localllm and the rest …
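For illustration, these are the kinds of routes an OpenAI-compatible server typically exposes; the host, port, and model name below are placeholders:

```shell
# Placeholder host/port; any OpenAI-compatible server (e.g. llama.cpp's
# llama-server, Ollama) serves these routes under /v1.
curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hi"}]}'
```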
-
### What do you need?
This is an awesome project, but it needs Ollama support.
The OpenAI API is the easy way out.
Please add support for local LLMs too.
Thank you.
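For context, Ollama's native generate endpoint takes a small JSON body (`POST http://localhost:11434/api/generate`). A minimal sketch of building that request; `llama3` is a placeholder for whatever model has been pulled locally:

```python
import json

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's native /api/generate endpoint.
    stream=False asks for a single response object instead of a stream."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

print(build_generate_request("llama3", "Hello"))
```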
-
```
sudo docker run --rm --add-host host.docker.internal=host-gateway -e LLM_API_KEY="ollama" -e LLM_BASE_URL="http://host.docker.internal:11434" -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE -v $WORK…
```
-
# Trending repositories for C#
1. [**AvaloniaUI / Avalonia**](https://github.com/AvaloniaUI/Avalonia)
__Develop Desktop, Embedded, Mobile and WebAssembly apps with C# and XAML. Th…