withcatai / node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License · 893 stars · 86 forks
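The JSON-schema enforcement mentioned in the description above can be sketched roughly as follows. This is a minimal sketch against the library's v3 API, not an authoritative example; the model path is a placeholder assumption, and running it requires a locally downloaded GGUF model file.

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load the library and a local GGUF model (the path is a placeholder assumption).
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// Build a grammar from a JSON schema; token generation is then constrained
// so the output always conforms to this schema.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        answer: {type: "string"},
        confidence: {type: "number"}
    }
} as const);

const response = await session.prompt("Is the sky blue?", {grammar});

// `parse` returns a typed object matching the schema.
const parsed = grammar.parse(response);
console.log(parsed.answer, parsed.confidence);
```

Because the constraint is applied while tokens are sampled rather than by post-hoc validation, the model cannot produce output that violates the schema.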
Issues (sorted by newest)
| Title | Issue | Author | Closed | Comments |
|---|---|---|---|---|
| feat(minor): reference common classes on the `Llama` instance | #360 | giladgd | 13 hours ago | 1 |
| docs: improvements | #357 | giladgd | 1 day ago | 1 |
| In some models, continuous dialogue seems to be automatically triggered when using Chinese | #356 | ckvv | 1 day ago | 2 |
| refactor: extract the advance features to an independent high-level library package | #354 | snowyu | 4 days ago | 3 |
| Settings and how we can control them to the API | #352 | tigert2173 | 1 week ago | 2 |
| feat: `resolveModelFile` method | #351 | giladgd | 1 week ago | 1 |
| fix: Llama3_1ChatWrapper types | #350 | vlamanna | 1 week ago | 1 |
| I cannot build my Typescript project when using v3.0.0 | #349 | vlamanna | 1 week ago | 2 |
| docs: home page fixes | #346 | giladgd | 1 week ago | 1 |
| build(docs): fix page time | #345 | giladgd | 1 week ago | 1 |
| fix: adapt to `llama.cpp` breaking change | #344 | giladgd | 1 week ago | 1 |
| fix: increase `semantic-release` retires for GitHub API ratelimit | #343 | giladgd | 1 week ago | 1 |
| fix(node template): bug | #342 | giladgd | 1 week ago | 0 |
| docs: history | #341 | giladgd | 1 week ago | 0 |
| fix: use a compressed logo image for `README.md` | #340 | giladgd | 1 week ago | 0 |
| fix: deploy docs website | #337 | giladgd | 1 week ago | 1 |
| build: deploy docs website | #336 | giladgd | 1 week ago | 1 |
| fix: release create-command package | #335 | giladgd | 1 week ago | 1 |
| build: release create-command package | #334 | giladgd | 1 week ago | 1 |
| build: fix release job | #332 | giladgd | 1 week ago | 1 |
| feat: v3.0 stable release | #331 | giladgd | 1 week ago | 1 |
| fix: improve model downloader CI logs | #329 | giladgd | 2 weeks ago | 2 |
| `Getting started` is incorrect | #328 | lukemovement | 2 weeks ago | 0 |
| feat: `resetChatHistory` function on a `LlamaChatSession` | #327 | giladgd | 2 weeks ago | 2 |
| build: fix CI config | #326 | giladgd | 2 weeks ago | 2 |
| build: fix CI config | #325 | giladgd | 2 weeks ago | 2 |
| docs: improve documentation | #324 | giladgd | 2 weeks ago | 2 |
| fix: revert `electron-builder` version used in Electron template | #323 | giladgd | 2 weeks ago | 2 |
| fix: no thread limit when using a GPU | #322 | giladgd | 2 weeks ago | 2 |
| docs: Update CUDA.md | #320 | B3none | 2 weeks ago | 1 |
| build: fix CI config | #318 | giladgd | 2 weeks ago | 2 |
| build: fix release bug | #317 | giladgd | 2 weeks ago | 2 |
| build: fix release bug | #316 | giladgd | 2 weeks ago | 2 |
| build: fix release bug | #315 | giladgd | 2 weeks ago | 2 |
| build: fix release config | #314 | giladgd | 2 weeks ago | 2 |
| build: fix CI config | #313 | giladgd | 2 weeks ago | 2 |
| build: fix CI config | #312 | giladgd | 2 weeks ago | 2 |
| Compiling LLAMA for cuda is single threaded | #311 | B3none | 2 weeks ago | 1 |
| feat: new docs | #309 | giladgd | 2 weeks ago | 2 |
| fix: update documentation website URL | #306 | giladgd | 1 month ago | 2 |
| fix: bump `llama.cpp` release used in prebuilt binaries | #305 | giladgd | 1 month ago | 2 |
| Error: vk::Queue::submit: ErrorDeviceLost | #304 | billyma128 | 2 weeks ago | 2 |
| Free VRAM programmatically instead with GC | #303 | IfnotFr | 1 month ago | 2 |
| Installation with CUDA-support fails early with "npm ERR! canceled" | #301 | itinance | 1 month ago | 1 |
| feat: support Functionary new chat format | #299 | physimo | 2 weeks ago | 7 |
| fix: revert to the latest stable Metal `llama.cpp` release | #297 | giladgd | 1 month ago | 2 |
| fix: more cases of unknown characters in generation streaming | #295 | giladgd | 1 month ago | 2 |
| fix: unkown characters in generation streaming | #293 | giladgd | 2 months ago | 2 |
| fix: adapt to `llama.cpp` breaking changes | #291 | giladgd | 2 months ago | 2 |
| Cuda Support | #290 | ShakeebAftab-KoyaDocu | 2 months ago | 2 |

All issues listed above are closed.