withcatai / node-llama-cpp

Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

docs: Update documentation on metal support via NODE_LLAMA_CPP_METAL env #108

Closed · timothycarambat closed this 9 months ago

timothycarambat commented 9 months ago

Description of change

Update the documentation on enabling or disabling Metal support via the NODE_LLAMA_CPP_METAL environment variable. The current example only shows how to disable Metal support when building the llama.cpp bindings via the CLI command.
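
For reference, the supported, build-time pattern looks along these lines (a sketch based on the project's docs; the `download` command and the macOS default may differ by version):

```bash
# Disable Metal support while compiling the llama.cpp bindings;
# the variable is read at build time, not at runtime:
NODE_LLAMA_CPP_METAL=false npx --no node-llama-cpp download

# Enable it explicitly (per the docs, it is enabled by default on macOS):
NODE_LLAMA_CPP_METAL=true npx --no node-llama-cpp download
```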


giladgd commented 9 months ago

@timothycarambat This environment variable only affects the compilation of llama.cpp; setting it at runtime, as in the example you provided, is not supported. There's already documentation for this environment variable here.
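
To make the distinction concrete, a sketch (the app entry point is hypothetical):

```bash
# Supported: the variable is consulted while llama.cpp is being compiled.
NODE_LLAMA_CPP_METAL=false npx --no node-llama-cpp download

# Not supported: setting it when merely running your app has no effect
# once compiled binaries are already in place.
NODE_LLAMA_CPP_METAL=false node ./my-app.js  # hypothetical entry point
```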

Thanks for the contribution anyway :)

giladgd commented 9 months ago

@timothycarambat node-llama-cpp builds llama.cpp at runtime only if no prebuilt binaries are available. The code appears to work for you because you ran it locally in a dev environment that didn't have any prebuilt binaries; it won't work on a second run, or with the module that's uploaded to npm.
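
A sketch of that lifecycle (hypothetical commands; `my-app.js` stands in for any script that imports node-llama-cpp):

```bash
# Fresh dev environment with no binaries: the module falls back to
# compiling llama.cpp at runtime, so the variable happens to be read.
NODE_LLAMA_CPP_METAL=false node ./my-app.js   # appears to work

# Second run: the binaries built above are reused and the variable
# is silently ignored.
NODE_LLAMA_CPP_METAL=true node ./my-app.js    # still the Metal-less build

# The package on npm ships with pre-built binaries, so end users never
# hit the runtime-build path at all. To change the setting, rebuild:
NODE_LLAMA_CPP_METAL=true npx --no node-llama-cpp download
```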