-
Using the command `$ CC="/opt/rocm/llvm/bin/clang" CXX="/opt/rocm/llvm/bin/clang++" CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers`, I am unable to compile ctransformers for ROCm. I'v…
-
I am trying to execute the following script:
```
from llama_cpp import Llama
llm = Llama(model_path="~/llama-2-7b.ggmlv3.q8_0.bin", n_gqa=8)
output = llm("Q: Name the planets in the solar sy…
```
-
Hello, I compiled this project to wasm, but it crashes when I run it.
![image](https://github.com/leejet/stable-diffusion.cpp/assets/1291945/91346f00-230a-4eed-84d4-e09c2fff0144)
-
### What happened?
Getting a consistent `missing tensor blk.0.ffn_down_exps.weight` error when trying to load mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf (from https://huggingface.co/TheBloke/Mixtral-8…
-
Error is:
```
ggml_metal_init: load pipeline error: Error Domain=AGXMetalA12 Code=3 "Encountered unlowered function call to air.simd_max.f32" UserInfo={NSLocalizedDescription=Encountered unlowered…
```
-
Hi,
this is not really an issue, more of a curiosity:
instead of `sumf += sum[0] + sum[1] + sum[2] + sum[3] + sum[4] + sum[5] + sum[6] + sum[7];`
did you try something similar to this for A64?
`sumf = …
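
For context, a minimal sketch of what such an A64 reduction could look like, assuming `sum` holds the eight float partial sums from the scalar line above (the function name here is illustrative, not taken from the actual code):
```
#include <arm_neon.h>

// Sketch (AArch64 only): reduce eight partial sums with NEON
// instead of adding the eight scalars one by one.
static inline float horizontal_sum8(const float sum[8]) {
    float32x4_t lo = vld1q_f32(sum);       // sum[0..3]
    float32x4_t hi = vld1q_f32(sum + 4);   // sum[4..7]
    return vaddvq_f32(vaddq_f32(lo, hi));  // add the two vectors, then reduce across lanes
}
```
`vaddvq_f32` is an AArch64-only horizontal add, which is why this kind of rewrite is specific to A64.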
-
I get the following error when trying to generate the large-v3 quantized coreml model:
```
$ ./models/generate-coreml-model.sh large-v3-q5_0
scikit-learn version 1.3.0 is not supported. Minimum req…
```
-
Hi,
I am trying to run this in Google Colab with a T4 GPU that has 12 GB RAM and 15 GB of GPU RAM. However, when I run the command below, it returns the following error:
> !python run_localGPT.py --d…
-
Hi,
When I press "transcribe", I get this error:
```
Initializing Whisper...
whisper_model_load: invalid model file '/Users/paulo/Library/Developer/CoreSimulator/Devices/C6B95CAF-36C2-4A5B-98…
```
-
I have tried most of the models that have come out in recent days, and this is the best one to run locally: faster than gpt4all and way more accurate.
[ggml-vicuna-7b-4bit-rev1.bin](https://huggingface.co/eachade…