withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://withcatai.github.io/node-llama-cpp/
MIT License

feat: gguf parser #168

Closed ido-pluto closed 4 months ago

ido-pluto commented 5 months ago

Description of change

Add the ability to quickly parse GGUF metadata from a local or remote file and derive insights from it, such as the amount of VRAM required to run the model.
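As a rough illustration of what parsing GGUF metadata involves, here is a minimal sketch that reads the fixed-size GGUF header fields (magic, version, tensor count, metadata key/value count) from a buffer, following the publicly documented GGUF file layout. The function name `parseGgufHeader` is hypothetical and not part of node-llama-cpp's actual API; the PR's implementation also parses the full metadata key/value section and remote files, which this sketch omits.

```javascript
// Hypothetical sketch: read the fixed GGUF header per the GGUF spec.
// Layout: 4-byte ASCII magic "GGUF", uint32 version,
// uint64 tensor count, uint64 metadata KV count (all little-endian).
function parseGgufHeader(buf) {
  const magic = buf.toString("ascii", 0, 4);
  if (magic !== "GGUF") {
    throw new Error("Not a GGUF file: bad magic");
  }
  return {
    magic,
    version: buf.readUInt32LE(4),          // GGUF format version
    tensorCount: buf.readBigUInt64LE(8),   // number of tensors in the file
    metadataKvCount: buf.readBigUInt64LE(16), // number of metadata KV pairs
  };
}
```

Estimating VRAM requirements would then build on top of the parsed metadata (e.g. tensor shapes and quantization types), which live in the variable-length sections that follow this header.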

Pull-Request Checklist

github-actions[bot] commented 4 months ago

:tada: This PR is included in version 3.0.0-beta.13 :tada:

The release is available on:

Your semantic-release bot :package::rocket: