-
## Description
I am using the Jupyter AI extension with a custom model provider, following the steps in https://jupyter-ai.readthedocs.io/en/latest/users/index.html#custom-model-providers
However th…
-
> [!TIP]
> ## Want to get involved?
> We'd love it if you did! Please get in contact with the people assigned to this issue, or leave a comment. See general contributing advice [here](https://micros…
-
```
52, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\app.py", line 42, in <module>
    llm = LlamaCpp(model_path="./models/llama-7b.ggmlv3.q4_0.bin")
…
```
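If I read the filename right, `llama-7b.ggmlv3.q4_0.bin` is a pre-GGUF GGML file, and recent llama.cpp / llama-cpp-python builds only load GGUF files, which start with the 4-byte magic `b"GGUF"`. A minimal sanity check (the helper name is illustrative):

```python
def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

If the check fails for a `.ggmlv3` file, the usual fix is to obtain or convert to a GGUF build of the same model rather than pointing `LlamaCpp` at the old file.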
-
To let users run their fine-tuned, LlamaCPP-compatible models, we need to add support for this in our class structure.
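One possible shape for that compatibility, sketched with hypothetical class names (`BaseModel` and `LlamaCppModel` are illustrative, not the project's actual classes):

```python
from abc import ABC, abstractmethod


class BaseModel(ABC):
    """Common interface every model backend implements (name is illustrative)."""

    @abstractmethod
    def load(self, model_path: str) -> None: ...

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class LlamaCppModel(BaseModel):
    """Adapter that puts llama-cpp-python behind the common interface."""

    def __init__(self) -> None:
        self._llm = None

    def load(self, model_path: str) -> None:
        # Imported lazily so the rest of the app runs without llama-cpp-python.
        from llama_cpp import Llama
        self._llm = Llama(model_path=model_path)

    def generate(self, prompt: str) -> str:
        # llama-cpp-python returns an OpenAI-style completion dict.
        out = self._llm(prompt, max_tokens=128)
        return out["choices"][0]["text"]
```

The adapter keeps the llama.cpp dependency behind one class, so other backends can slot in beside it without touching callers.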
-
### Cortex version
Jan v0.5.4
### Describe the Bug
https://discord.com/channels/1107178041848909847/1296496734901375146
Hi, when I try to use my AMD GPU, with vulkan, I get the failed to load mo…
-
It would be good if the KV cache key type could be set in Ollama.
llama.cpp lets you set the K cache type, which can reduce memory usage as the KV cache grows, especially when ru…
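The memory argument can be sketched with back-of-envelope arithmetic (the Llama-7B-like dimensions and the ~1 byte/element figure for q8_0 below are assumptions, not measurements):

```python
# Illustrative numbers: Llama-7B-like model, 32 layers, 4096-wide KV
# projection per layer; f16 = 2 bytes/element, q8_0 ≈ 1 byte/element.
N_LAYERS = 32
KV_WIDTH = 4096

def kv_cache_bytes(n_ctx: int, k_bytes: float, v_bytes: float) -> int:
    """Total size of the K and V caches at context length n_ctx."""
    return int(n_ctx * N_LAYERS * KV_WIDTH * (k_bytes + v_bytes))

f16_all = kv_cache_bytes(4096, 2.0, 2.0)  # K and V both f16
q8_key  = kv_cache_bytes(4096, 1.0, 2.0)  # K cache quantized to ~q8_0

print(f16_all / 2**30, q8_key / 2**30)  # 2.0 GiB vs 1.5 GiB
```

Under these assumptions, quantizing just the K cache saves about a quarter of the KV memory at full context, and the saving scales linearly with context length.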
-
### Describe the bug
After an upgrade/clean install, attempting to load a model in "llama.cpp" in a CPU-only configuration fails with the log message below. Restarting doesn't help.
Note: possibly…
-
### What happened?
I am trying this library in Docker on a Mac and hit this error while running the tests.
I know this library is tested only on Linux, but I wonder whether it is possible to run it in a Docker image. …
-
**The bug**
I am attempting to run LLaVA through LlamaCpp, and I am getting incorrect responses.
**To Reproduce**
```python
# Imports
import guidance
from guidance import image
from guidance im…
-
### Cortex version
cortex-1.0.0-linux-amd64-local-installer.deb
### Describe the Bug
It is impossible to run a model in Docker.
### Steps to Reproduce
1. docker run -it debian:trixie /bin/bash
2. apt …