second-state / WasmEdge-WASINN-examples
Apache License 2.0 · 217 stars · 35 forks
Issues (sorted by most commented)
#57 · [0.13.5] CUDA error 222 · katopz · closed 6 months ago · 26 comments
#26 · Updated docs related to openvino · Mash707 · closed 9 months ago · 15 comments
#69 · Unable to clear the context object knowledge · niranjanakella · closed 4 months ago · 12 comments
#52 · [Example] Use new ggml backend with llama options support · dm4 · closed 6 months ago · 12 comments
#113 · loading failed: magic header not detected, Code · njalan · closed 2 months ago · 9 comments
#67 · very slow and issues in ubuntu wsl with cuda · eramax · closed 2 months ago · 9 comments
#66 · Error Running WasmEdge with llama2 Model: GGML_ASSERT Failure · xISSAx · closed 4 months ago · 9 comments
#48 · free(): invalid pointer Aborted (core dumped) on arm64 arch linux. · shahizat · closed 6 months ago · 9 comments
#135 · Add neural speed example · grorge123 · opened 3 weeks ago · 7 comments
#134 · Error on running example openvino-road-segmentation-adas · nerdola-de-cartola · opened 3 weeks ago · 7 comments
#65 · OpenHermes-2.5-Mistral-7B-GPTQ always get [INST] <<SYS>>... for 1st question · katopz · closed 6 months ago · 7 comments
#61 · Memory keeps increasing for each inference · katopz · closed 6 months ago · 7 comments
#9 · [Examples] add Pytorch image demo · gusye1234 · closed 1 year ago · 7 comments
#55 · Issue running on ARM architecture · niranjanakella · closed 6 months ago · 6 comments
#39 · unknown option: nn-preload · katopz · closed 7 months ago · 6 comments
#35 · Add [Example] Create pytorch-yolo-image example · Charles-Schleich · closed 3 months ago · 6 comments
#68 · Failing to install WasmEdge through Curl "302 Moved Temporarily" · niranjanakella · closed 5 months ago · 5 comments
#51 · Stuck very long and then got meaningless output when running llama2 inference · darthjaja6 · closed 6 months ago · 5 comments
#46 · Replace unwrap() with expect() and provide error messages. · LiyanJin · closed 3 months ago · 5 comments
#24 · Update `openvino` examples · apepkuss · closed 10 months ago · 5 comments
#16 · [Example] use local image in all demos · gusye1234 · closed 1 year ago · 5 comments
#63 · Set metadata when building graph · dm4 · closed 5 months ago · 4 comments
#50 · [error] [WASI-NN] GGML backend: Error: prompt too long (570 tokens, max 508) · niranjanakella · closed 7 months ago · 4 comments
#41 · docs: Add note for install errors and solutions · katopz · closed 7 months ago · 4 comments
#23 · [Examples] Update the wasi-nn crate dependency to 0.4.0 · yanghaku · closed 10 months ago · 4 comments
#20 · Does docker support WASI-NN with PyTorch Backend? · warpmatrix · closed 10 months ago · 4 comments
#128 · terminate called after throwing an instance of 'torch::jit::ErrorReport' terminate called recursively Aborted (core dumped) · lebron8dong · opened 1 month ago · 3 comments
#116 · qwen1_5-14b-chat-q5_k_m.gguf is not working · njalan · closed 1 month ago · 3 comments
#72 · [Example] Support metadata of ggml output · dm4 · closed 5 months ago · 3 comments
#54 · [feat] New example `CodeLlama-13B-Instruct` · apepkuss · closed 6 months ago · 3 comments
#53 · [feat] New example `Belle-Llama2-13B-GGUF` · apepkuss · closed 6 months ago · 3 comments
#140 · Question: yolo use ggml as the backend,need help... · jokemanfire · closed 1 week ago · 2 comments
#139 · Fix openvino-mobilenet-image example document · 15kubernetes · closed 1 week ago · 2 comments
#126 · [Example] ggml: add grammar example · dm4 · closed 1 month ago · 2 comments
#115 · The read bytes are not valid UTF-8: Error { kind: InvalidData, message: "stream did not contain valid UTF-8" } · njalan · opened 2 months ago · 2 comments
#112 · very slow and issues in nvidia jetson · Links17 · closed 2 months ago · 2 comments
#111 · [Example] ggml: add multimodel example with CI · dm4 · closed 2 months ago · 2 comments
#86 · bug: Failed to run example in "wasmedge-ggml-llama-interative" with error "thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: BackendError(ContextFull)'" · MaTwickenham · closed 3 months ago · 2 comments
#85 · [Example] ggml: fix error about empty stream-stdout · dm4 · closed 4 months ago · 2 comments
#73 · [Example] Add M3 Max to perf table · katopz · closed 5 months ago · 2 comments
#70 · GGML_ASSERT: /Users/hydai/workspace/WasmEdge/plugins/wasi_nn/thirdparty/ggml/llama.cpp:5745: n_tokens <= n_batch · niranjanakella · closed 5 months ago · 2 comments
#49 · [Example] Add llama streaming example · dm4 · closed 6 months ago · 2 comments
#47 · Better error handling and more robust output handling · juntao · closed 7 months ago · 2 comments
#42 · [Question] How can I obtain inference output as streaming tokens? · katopz · closed 7 months ago · 2 comments
#31 · ggml on macOS ARM64 support · tudi2d · closed 7 months ago · 2 comments
#30 · Update openvino workflows · apepkuss · closed 8 months ago · 2 comments
#29 · CI jobs for ggml llama run failed · dm4 · closed 8 months ago · 2 comments
#28 · [Examples] Add wasmedge-ggml-llama examples · dm4 · closed 8 months ago · 2 comments
#21 · Update the examples to 0.13.1 and above · juntao · closed 3 months ago · 2 comments
#13 · [pytorch] The malformed wasm file · apepkuss · closed 10 months ago · 2 comments