**Open** · LunasShade opened this issue 1 month ago
Thanks for flagging this. Is there more to the error message?
If you want to run local models with an Nvidia GPU, I would actually recommend using Ollama. All you need to do is install Ollama and then follow the Ollama part of the configuration section of the wiki.
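For reference, the wiki's Ollama setup boils down to pointing LSP-AI at a locally running Ollama model through its JSON initialization options. This is only a sketch: the exact keys shown here (`memory`, `models`, `completion`, and the `"type": "ollama"` model entry) are assumptions that should be checked against the current wiki, and `deepseek-coder` is just a placeholder model name:

```json
{
  "memory": { "file_store": {} },
  "models": {
    "model1": {
      "type": "ollama",
      "model": "deepseek-coder"
    }
  },
  "completion": {
    "model": "model1",
    "parameters": {
      "max_context": 2000
    }
  }
}
```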
Sorry for keeping you waiting. I have now tried to install it without llama_cpp and just ran

```
cargo install lsp-ai -F cuda
```

The output of the failed installation is so huge that it fills my whole terminal history, and even then it is incomplete. Do you have a page where I can paste it and send you the link?
Try redirecting the output to a log file with `&>`, then send the file. Or use a pastebin.
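To illustrate the redirection (bash syntax; `&>` is shorthand for `> file 2>&1` and captures both stdout and stderr in one file):

```shell
# `&>` sends both stdout and stderr to the same file (bash shorthand for `> file 2>&1`).
# Stand-in for a command that writes to both streams:
{ echo "build output"; echo "error output" >&2; } &> install.log

# For the actual build, the same idea would be:
#   cargo install lsp-ai -F cuda &> install.log

cat install.log
```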
I put everything that was possible in this pastebin: https://pastebin.com/wUfRB16N
Hello, I used the method that is recommended for Linux users with Nvidia GPUs, and the installation failed.

I also tried

```
set CXX "g++-11"
```

before I ran the command, and it failed anyway:

```
error: failed to compile `lsp-ai v0.3.0`, intermediate artifacts can be found at `/tmp/cargo-installIGWYNk`.
To reuse those artifacts with a future compilation, set the environment variable `CARGO_TARGET_DIR` to that path.
```

Distro: Linux archlinux 6.9.6-1-cachyos-eevdf
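A side note on the `set CXX "g++-11"` attempt: that is fish-shell syntax, and a plain `set` in fish does not export the variable to child processes such as cargo and the C++ compiler it spawns. A hedged sketch of the exported forms (assuming `g++-11` is actually installed on the system):

```shell
# fish: a plain `set` is not exported; add -x so cargo and cc see it:
#   set -x CXX g++-11
# bash/zsh equivalent:
export CXX=g++-11
echo "$CXX"

# Then rerun the build in the same shell session:
#   cargo install lsp-ai -F cuda
```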