cmp-nct / ggllm.cpp
Falcon LLM ggml framework with CPU and GPU support
License: Other · 244 stars · 21 forks
Issues (sorted by: Newest)
| #   | Title | Author | State | Age | Comments |
|-----|-------|--------|-------|-----|----------|
| #83 | #1 performance requirement | cmp-nct | opened | 1 year ago | 0 |
| #82 | falcon_quantize: fix tokenizer.json path issue #1 | maddes8cht | closed | 1 year ago | 0 |
| #81 | Add support for AMX instructions (bf16 and/or int8) | WilliamTambellini | opened | 1 year ago | 0 |
| #79 | Update openbuddy's vocab size | 44670 | closed | 1 year ago | 0 |
| #78 | Metal support | ghost | opened | 1 year ago | 2 |
| #77 | Log the version of cuda that is being used | WilliamTambellini | opened | 1 year ago | 0 |
| #76 | linking error with static build | WilliamTambellini | opened | 1 year ago | 2 |
| #75 | Performance - heads up | cmp-nct | opened | 1 year ago | 2 |
| #74 | refactor gpt_params_parse, add validate_params function | maddes8cht | opened | 1 year ago | 0 |
| #73 | Enable pipe-friendly help output | maddes8cht | closed | 1 year ago | 1 |
| #72 | Would ggllm.cpp adopt the upcoming ggml fileformat (GGUF)? | klosax | opened | 1 year ago | 1 |
| #71 | --help , pipes and inconsistent help text | maddes8cht | opened | 1 year ago | 7 |
| #68 | Can't falcon_convert on OpenBuddy Falcon 7B model, KeyError [fixed] | tak2hu | opened | 1 year ago | 1 |
| #67 | Even in interactive mode, multiturn conversation is not possible. | ehalit | opened | 1 year ago | 3 |
| #66 | refactoring of gpt_params_parse with correct validateParams function | maddes8cht | closed | 1 year ago | 6 |
| #65 | 16k+ context upgrade - Long-range Falcon | cmp-nct | closed | 1 year ago | 7 |
| #64 | Could we also get the Makefile updated to build a libfalcon.so | linuxmagic-mp | opened | 1 year ago | 3 |
| #62 | Upcoming PR - Pushing the Context limit to 8k+ for all existing Falcon models - Longrange Falcon flights | cmp-nct | opened | 1 year ago | 5 |
| #61 | Loading system prompt from file: | maddes8cht | closed | 1 year ago | 1 |
| #60 | Sys-msg-improvements | maddes8cht | closed | 1 year ago | 0 |
| #59 | Sys-msg-improvements | maddes8cht | closed | 1 year ago | 0 |
| #58 | Kv optimization - up to 3x performance on larger context | cmp-nct | closed | 1 year ago | 0 |
| #57 | CMakeLists refinements for CUBLAS and phthread | luav | opened | 1 year ago | 2 |
| #56 | Performance at high context (18k+) | cmp-nct | opened | 1 year ago | 4 |
| #55 | Debug Timings No Longer Working | boricuapab | closed | 1 year ago | 2 |
| #54 | Instruct Mode Issue | boricuapab | closed | 1 year ago | 5 |
| #53 | Split loader 1 | cmp-nct | closed | 1 year ago | 0 |
| #52 | Is there any GUI or Web UI for ggllm.cpp? | JohnClaw | opened | 1 year ago | 13 |
| #51 | Can you migrate the new server from llama.cpp to here? | maddes8cht | opened | 1 year ago | 3 |
| #50 | fixes pull 48 | cmp-nct | closed | 1 year ago | 0 |
| #49 | Apple Silicon Unable To Build | only-cliches | closed | 1 year ago | 6 |
| #48 | This commit changes the naming conventioins of the shared object file… | sirajperson | closed | 1 year ago | 4 |
| #47 | Implement ChatGLM.cpp | iHaagcom | closed | 1 year ago | 0 |
| #44 | Memory optimizations | cmp-nct | closed | 1 year ago | 0 |
| #43 | release builds should have other names than "llama-master-codestring.zip" | maddes8cht | closed | 1 year ago | 2 |
| #42 | Does the prompt cache work? I got an alignment assert when I turned it on. | johnburkey | opened | 1 year ago | 19 |
| #41 | OpenBLAS and CLBlast support | Fr0d0Beutl1n | opened | 1 year ago | 4 |
| #40 | falcon-main.exe Exits Unexpectedly after 'Numa' Commit | maddes8cht | closed | 1 year ago | 3 |
| #39 | Fix Makefile | bryfry | closed | 1 year ago | 1 |
| #38 | Tokenizer ggcc | cmp-nct | closed | 1 year ago | 0 |
| #37 | Steps forward - Tokenizer | cmp-nct | opened | 1 year ago | 14 |
| #35 | Tokenenizer fix 1 | cmp-nct | closed | 1 year ago | 0 |
| #34 | K Quant 64 support - quite a feat to integrate | cmp-nct | opened | 1 year ago | 4 |
| #33 | Loss of context in batch prompt processing - once again | cmp-nct | closed | 1 year ago | 2 |
| #32 | Cuda performance broadcast | cmp-nct | closed | 1 year ago | 2 |
| #31 | Mul_mat Speedup?? | boricuapab | opened | 1 year ago | 12 |
| #30 | CUDA mul_mat using cuBLAS for 3d multiplication fails on lm_head only for Falcon 7B | cmp-nct | closed | 1 year ago | 2 |
| #29 | Windows Installation Video Tutorial | boricuapab | closed | 1 year ago | 1 |
| #28 | Maintenance fixes 1 | cmp-nct | closed | 1 year ago | 0 |
| #27 | Cuda performance improvement 1 | cmp-nct | closed | 1 year ago | 0 |