withcatai/node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.
https://withcatai.github.io/node-llama-cpp/
MIT License · 729 stars · 62 forks
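The tagline's "force a JSON schema on the model output at the generation level" refers to grammar-constrained decoding: instead of validating (and retrying) the output after generation, tokens that would break the schema are excluded while sampling, so the model can only ever produce conforming output. Below is a minimal toy sketch of that idea, not node-llama-cpp's actual API; the schema, vocabulary, and scoring function are all invented for illustration:

```typescript
// Toy illustration of grammar-constrained decoding (NOT node-llama-cpp's real API):
// at each step the sampler may only pick tokens that keep the output a valid
// prefix of some string the grammar accepts. Here the "grammar" is hard-coded
// for the tiny schema {"answer": <digits>}.
function isValidPrefix(text: string): boolean {
  const target = '{"answer":'; // fixed structural part of the output
  if (text.length <= target.length) return target.startsWith(text);
  if (!text.startsWith(target)) return false;
  // After the structural part: digits, optionally closed by a final "}".
  return /^\d+\}?$/.test(text.slice(target.length));
}

// Greedy decoding over a toy vocabulary, with the grammar filter applied
// before every pick (a stand-in for masking llama.cpp's logits).
function constrainedDecode(vocab: string[], score: (t: string) => number): string {
  let out = "";
  while (!/^\{"answer":\d+\}$/.test(out)) {
    const allowed = vocab.filter((tok) => isValidPrefix(out + tok));
    if (allowed.length === 0) break; // dead end: grammar cannot be completed
    out += allowed.reduce((a, b) => (score(a) >= score(b) ? a : b));
  }
  return out;
}

const vocab = ['{"', "answer", '":', "4", "2", "}", "hello"];
const prefer: Record<string, number> = { "}": 3, "2": 2 }; // mock "model" preferences
const result = constrainedDecode(vocab, (t) => prefer[t] ?? 1);
console.log(result); // → {"answer":2}
```

Without the filter, this greedy picker could start with "hello"; with it, only grammar-consistent tokens are ever candidates, so the result is guaranteed to parse against the schema. node-llama-cpp applies the same principle at the logit level, using a grammar derived from a user-supplied JSON schema.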
Issues (newest first)
#264 · feat: flash attention · giladgd · closed 6 hours ago · 1 comment
#263 · Error: Conversation roles must alternate user/assistant/user/assistant/.. · AliAzlanAziz · opened 1 day ago · 1 comment
#262 · Receiving error when compiling for cuda · AliAzlanAziz · closed 2 days ago · 2 comments
#261 · Problem when running some models with cuda · bqhuyy · opened 3 days ago · 3 comments
#259 · fix: macOS prebuilt binaries · giladgd · closed 5 days ago · 1 comment
#257 · fix: Linux prebuilt binaries · giladgd · closed 5 days ago · 1 comment
#255 · fix: Windows build · giladgd · closed 5 days ago · 1 comment
#254 · fix: Windows and CUDA bindings · giladgd · closed 5 days ago · 1 comment
#252 · build: fix release job · giladgd · closed 6 days ago · 1 comment
#251 · illegal hardware instruction on M2 · RobertAron · closed 6 days ago · 4 comments
#250 · feat: move CUDA prebuilt binaries to dependency modules · giladgd · closed 6 days ago · 1 comment
#249 · fix: long `LlamaText` tokenization · giladgd · closed 1 week ago · 1 comment
#247 · fix: bump llama.cpp release used in prebuilt binaries · giladgd · closed 2 weeks ago · 1 comment
#245 · fix: preload error on `chat` command · giladgd · closed 2 weeks ago · 1 comment
#243 · fix: remove CUDA binary compression for Windows · giladgd · closed 2 weeks ago · 1 comment
#241 · fix: bugs · giladgd · closed 2 weeks ago · 1 comment
#238 · fix: remove CUDA binary compression for now · giladgd · closed 2 weeks ago · 1 comment
#236 · feat: compress CUDA prebuilt binaries · giladgd · closed 2 weeks ago · 1 comment
#234 · feat: render markdown in the Electron example · giladgd · closed 3 weeks ago · 1 comment
#232 · fix: async gpu info getters · giladgd · closed 3 weeks ago · 1 comment
#230 · fix: Electron example build · giladgd · closed 3 weeks ago · 1 comment
#228 · fix: Electron example build · giladgd · closed 3 weeks ago · 1 comment
#226 · feat: improve loading status in the Electron example · giladgd · closed 3 weeks ago · 1 comment
#225 · feat: parallel function calling · giladgd · closed 3 weeks ago · 1 comment
#223 · fix: bump `llama.cpp` release used in prebuilt binaries · giladgd · closed 1 month ago · 1 comment
#221 · fix: templates bugs · giladgd · closed 1 month ago · 1 comment
#219 · fix: include templates in npm package · giladgd · closed 1 month ago · 1 comment
#217 · feat: `init` command to scaffold a new project from a template · giladgd · closed 1 month ago · 1 comment
#215 · feat: improve grammar support · giladgd · closed 1 month ago · 0 comments
#214 · feat: split gguf files support · giladgd · closed 1 month ago · 1 comment
#213 · Response streaming in 3.0.0 beta version · Reyons227 · closed 2 months ago · 2 comments
#212 · Loading Llama3 in Electron · bitterspeed · closed 1 month ago · 2 comments
#211 · LlamaCpp crash when embedding (in beta) · vodkaslime · closed 1 month ago · 11 comments
#210 · Integrate TS compiler to parse types to grammar · jazelly · closed 2 months ago · 1 comment
#208 · fix: adapt to `llama.cpp` changes · giladgd · closed 2 months ago · 1 comment
#207 · Support for Llama 3 · clvnthe04 · closed 2 months ago · 1 comment
#206 · Need help, Can't get CUDA support to work · robegamesios · closed 2 months ago · 2 comments
#205 · feat: Llama 3 support · giladgd · closed 2 months ago · 1 comment
#204 · Function call error · christianh104 · closed 2 months ago · 2 comments
#202 · feat(`inspect gpu` command): print env info · giladgd · closed 2 months ago · 1 comment
#200 · Cannot instantiate new LlamaModel bc class constructor was changed to private in beta · convertsee-dev · closed 2 months ago · 3 comments
#199 · Error: ENOENT: no such file or directory, open undefinedbinariesGithubRelease.json · linonetwo · closed 2 months ago · 9 comments
#198 · feat(`inspect gpu` command): print device names · giladgd · closed 2 months ago · 1 comment
#197 · fix: fallback to general chat wrapper · giladgd · closed 2 months ago · 1 comment
#196 · feat: token biases · giladgd · closed 2 months ago · 1 comment
#191 · feat: download models using the CLI · giladgd · closed 2 months ago · 1 comment
#190 · kv slot none · kelvinwop · closed 2 months ago · 2 comments
#188 · fix: create a context with no parameters · giladgd · closed 3 months ago · 1 comment
#186 · Inconsistent tokenization/encoding · StrangeBytesDev · closed 3 months ago · 3 comments
#183 · fix: adapt to breaking `llama.cpp` changes · giladgd · closed 3 months ago · 2 comments