-
I've been using llama.cpp for quite a while (M1 Mac). Is there a way I can get ai_voicetalk_local.py to point to that installation instead of reinstalling it here? Sorry, newbie question...
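(Sketch, not from the original question: if `ai_voicetalk_local.py` sits on top of the llama-cpp-python bindings, those can typically be pointed at an already-built `libllama` via the `LLAMA_CPP_LIB` environment variable instead of rebuilding; worth verifying against the version you have installed.)

```python
# Assumption: ai_voicetalk_local.py uses llama-cpp-python under the hood.
# llama-cpp-python can load an existing shared library named by LLAMA_CPP_LIB.
import os

os.environ["LLAMA_CPP_LIB"] = "/path/to/llama.cpp/libllama.dylib"  # existing M1 build
from llama_cpp import Llama  # import only after the variable is set
```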
-
**Describe the bug**
I'm noticing the error below with our Tabby deployment; it looks like a memory error. I don't have any additional logs, since we've modified the logging to mask input/output information, th…
-
It is likely that we will want to integrate llama.cpp (or one of its available Rust bindings) into our stack. It will be important to have comparison benchmarks (a rough harness sketch follows this checklist). The following is required:
- [ ] Ben…
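A minimal sketch of the kind of throughput comparison implied here, assuming llama-cpp-python as one binding under test; the model path and prompt are placeholders, and this measures wall-clock tokens/sec only, not memory:

```python
# Minimal throughput benchmark sketch, assuming llama-cpp-python.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)  # placeholder path

start = time.perf_counter()
out = llm("Benchmark prompt:", max_tokens=256)
elapsed = time.perf_counter() - start

# The completion result reports token counts OpenAI-style under "usage".
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.2f}s -> {n_tokens / elapsed:.1f} tok/s")
```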
-
### Description:
I encountered an issue while trying to run my Flutter application using the `llama_cpp_dart` package. The error occurs when the app attempts to load the `libllama.so` dynamic library…
-
1. By using the command `CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 python setup.py bdist_wheel`, I can build a wheel and have it installed as:
```console
llama_cpp:
total 3.8M
-rwxrwxr-x 1 …
```
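A quick sanity check that the resulting wheel actually offloads to the GPU, assuming a GGUF model on disk (the path is a placeholder); with cuBLAS compiled in, the layers requested via `n_gpu_layers` should show up as offloaded in the startup log:

```python
# Sanity check for the CUDA-enabled wheel; model path is a placeholder.
from llama_cpp import Llama

# n_gpu_layers=-1 requests offloading all layers; watch the load log for
# lines reporting how many layers were placed on the GPU.
llm = Llama(model_path="model.gguf", n_gpu_layers=-1)
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```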
-
Will you consider supporting the llama.cpp server API for inference?
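For reference, llama.cpp's bundled server exposes a native `/completion` endpoint (newer versions also add an OpenAI-compatible `/v1/chat/completions`); a minimal sketch of talking to it, assuming a server listening on `localhost:8080`:

```python
# Minimal sketch: query a running llama.cpp server via its native
# /completion endpoint. Assumes it listens on localhost:8080.
import json
import urllib.request

def complete(prompt: str, n_predict: int = 64) -> str:
    req = urllib.request.Request(
        "http://localhost:8080/completion",
        data=json.dumps({"prompt": prompt, "n_predict": n_predict}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["content"]  # generated text

print(complete("The capital of France is"))
```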
-
I'm not able to use llama across threads. How can I wrap it in a mutex? Can we avoid using lifetimes in https://github.com/utilityai/llama-cpp-rs/blob/071598cfb85cb419c4390580054488d8dc731ff7/llama-cpp-…
-
**Details**:
I am using `llama.cpp` with GPU support for my projects. However, I found that the `/api/generate` endpoint, which might be expected by Copilot for Obsidian, is not supported by `llama.c…
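One workaround, not from the original report: `/api/generate` is Ollama's endpoint, so a small translation shim can accept Ollama-style requests and forward them to llama.cpp's native `/completion` endpoint. A minimal sketch follows; the field names on the Ollama side are assumptions from its public API and should be checked against what Copilot for Obsidian actually sends:

```python
# Hypothetical shim: translate Ollama-style /api/generate requests into
# llama.cpp server /completion calls. Assumes llama.cpp server on :8080.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

LLAMA_URL = "http://localhost:8080/completion"

class GenerateShim(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/generate":
            self.send_error(404)
            return
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Assumed Ollama-style request: {"model": ..., "prompt": ..., "stream": false}
        upstream = urllib.request.Request(
            LLAMA_URL,
            data=json.dumps({"prompt": body.get("prompt", ""), "n_predict": 128}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(upstream) as resp:
            content = json.load(resp)["content"]
        # Assumed Ollama-style response: generated text under "response".
        out = json.dumps({"response": content, "done": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

# 11434 is Ollama's default port, so the client needs no reconfiguration.
HTTPServer(("localhost", 11434), GenerateShim).serve_forever()
```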
-
llama.cpp has updated the following code in common/common.cpp:

```cpp
if (arg == "--lora") {
    CHECK_ARG
    params.lora_adapter.emplace_back(argv[i], 1.0f);
    return true;
}
```

to `if (arg == "-…`