withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

docs: Update CUDA.md #320

Closed by B3none 2 weeks ago

B3none commented 2 weeks ago

Description of change

Updated documentation for CUDA.md

giladgd commented 2 weeks ago

Thanks for the PR! I've rewritten the documentation from scratch and mentioned the slow CUDA compilation in the new CUDA guide. I plan to release it in the next few days, so I'd rather not update the old documentation until then, to avoid more merge conflicts.
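
For anyone landing on this thread before the rewritten CUDA guide is published, here is a minimal sketch of requesting the CUDA backend when loading a model. It assumes the v3-style `getLlama({gpu: "cuda"})` API and a local `models/model.gguf` file; neither is part of this PR, so check the released guide for the authoritative steps.

```typescript
import {fileURLToPath} from "url";
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Explicitly request the CUDA backend (assumption: v3-style API).
// If no prebuilt CUDA binary matches your setup, the binding may be
// compiled locally, which can take a long time.
const llama = await getLlama({gpu: "cuda"});

// Load a GGUF model from a hypothetical local path.
const model = await llama.loadModel({
    modelPath: path.join(__dirname, "models", "model.gguf")
});

const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

console.log(await session.prompt("Hi there"));
```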