kaust-generative-ai / local-deployment-llama-cpp

Project to help you get started running LLMs locally with LLaMA C++.
Apache License 2.0

Configure the cache directory used by LLaMA C++ #10

Closed: davidrpugh closed this issue 4 weeks ago

davidrpugh commented 1 month ago

Need to specify the cache directory so that LLaMA C++ caches models inside the project directory (and not in some arbitrary place in the user's home directory). Maybe this requires setting some environment variable?
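
A minimal sketch of one way this could work, assuming a recent llama.cpp build: the CLI tools consult the `LLAMA_CACHE` environment variable when deciding where to cache downloaded models, falling back to a directory under the user's home (e.g. `~/.cache/llama.cpp`) when it is unset. The `./models` path and the Hugging Face repo/file names below are illustrative choices, not part of this project:

```bash
# Point LLaMA C++'s model cache at a directory inside the project
# instead of the default location under the user's home directory.
export LLAMA_CACHE="$PWD/models"
mkdir -p "$LLAMA_CACHE"

# A model fetched via the Hugging Face download flags should now be
# cached under ./models rather than ~/.cache/llama.cpp.
# (Repo and file names below are placeholders for illustration.)
llama-cli \
    --hf-repo bartowski/Llama-3.2-1B-Instruct-GGUF \
    --hf-file Llama-3.2-1B-Instruct-Q4_K_M.gguf \
    -p "Hello"
```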

davidrpugh commented 4 weeks ago

Closed by #29.