-
**Describe the bug**
After following the installation instructions for Mac (Apple Metal) on the 'Getting Started' page, I try to do SDG and see this:
```
$ ilab data generate
INFO 2024-08-22 13:45:45,…
```
-
Open to suggestions / assistance on how to make installation easier and less error-prone.
One thought is to add better platform detection to the CMakeLists and provide better docs / links if requir…
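As a rough illustration of the suggestion above (not actual CMakeLists logic), here is a minimal Python sketch of platform detection that picks a plausible set of CMAKE_ARGS; the flag names are assumptions and vary between llama.cpp versions:
```python
# Hypothetical helper illustrating the platform-detection idea above.
# The CMake flag names are assumptions; check the llama.cpp docs for the
# options used by your version (GGML_* in recent releases, LLAMA_* in older ones).
import platform


def suggest_cmake_args() -> str:
    system = platform.system()    # e.g. "Darwin", "Linux", "Windows"
    machine = platform.machine()  # e.g. "arm64", "x86_64"

    if system == "Darwin" and machine == "arm64":
        # Apple Silicon: Metal acceleration is the usual choice.
        return "-DGGML_METAL=ON"
    if system == "Linux":
        # Placeholder: a real check would probe for CUDA/ROCm/oneAPI here.
        return "-DGGML_NATIVE=ON"
    # Fall back to a plain CPU build.
    return ""


if __name__ == "__main__":
    print(f'CMAKE_ARGS="{suggest_cmake_args()}" pip install llama-cpp-python')
```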
-
```
llama_model_loader: loaded meta data with 32 key-value pairs and 219 tensors from /data/huggingface/hub/models--city96--t5-v1_1-xxl-encoder-gguf/snapshots/005a6ea51a7d0b84d677b3e633bb52a8c85a83d9/./t5…
```
-
**Is your feature request related to a problem? Please describe.**
I am using llama-cpp-python in some projects, and the difference in build time between using and not using llama-cpp-python is 15-20 …
-
**LocalAI version:**
container image: AIO Cuda12-latest
**Environment, CPU architecture, OS, and Version:**
VM, Ubuntu 22.04 latest
NVIDIA 2600
**Describe the bug**
I get a memory issue while swi…
-
**Is your feature request related to a problem? Please describe.**
It would be nice to integrate https://llama-cpp-python.readthedocs.io/en/stable/#embeddings because of the speed of the default `senten…
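For reference, a minimal sketch of what using the llama-cpp-python embeddings API linked above could look like; the model path is a placeholder, and the exact shape of the returned embedding depends on the model and library version:
```python
# Minimal sketch of generating embeddings with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/your-embedding-model.gguf",  # placeholder path
    embedding=True,  # serve embeddings instead of text completion
)

# create_embedding returns an OpenAI-style response dict.
result = llm.create_embedding("A quick test sentence.")
vector = result["data"][0]["embedding"]
print(len(vector))  # dimensionality depends on the model
```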
-
This epic is a major tracker for all the backend additions that should be part of LocalAI v2 and ongoing efforts.
The objective is to release a v2 that deprecates old models which are now superse…
-
### What is the issue?
```
(.venv) [root@bastion ollama]# python llm/llama.cpp/convert-hf-to-gguf.py ./model --outtype f16 --outfile converted.bin
INFO:hf-to-gguf:Loading model: model
INFO:gguf.gguf_…
```
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
# Expected Behavior
Passing the oneMKL flags via CMAKE_ARGS and installing llama-cpp-python via pip should finish successfully, as the flags are supported by llama.cpp:
https://github.com/ggerganov/llama…
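As a hedged sketch of the installation path being described (not the exact command from this report), CMAKE_ARGS can be set in the environment before invoking pip; the specific BLAS/oneMKL flag names below are assumptions and differ between llama.cpp versions:
```python
# Sketch: install llama-cpp-python with oneMKL-related flags passed through
# CMAKE_ARGS. The flag names are assumptions for illustration; the exact
# options (LLAMA_* vs. GGML_*) depend on the llama.cpp version bundled with
# the llama-cpp-python release being installed.
import os
import subprocess
import sys

env = dict(os.environ)
env["CMAKE_ARGS"] = "-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=Intel10_64lp"

subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-cache-dir", "llama-cpp-python"],
    env=env,
    check=True,
)
```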