mdrokz/rust-llama.cpp — llama.cpp Rust bindings
https://crates.io/crates/llama_cpp_rs/
MIT License · 287 stars · 42 forks
Issues (newest first)
| # | Title | Author | State | Last activity | Comments |
|-----|-------|--------|--------|---------------|----------|
| #43 | Removed count println from predict(..) as per #28 | BenjaminMassey | closed | 1 week ago | 0 |
| #42 | Fix build.rs to build under macOS | gmerlino | closed | 1 week ago | 2 |
| #41 | Fix CUDA on Windows | glcraft | closed | 2 months ago | 2 |
| #40 | Compiling with metal feature has `ggml-metal.o` linker failure | tc-wolf | open | 4 months ago | 3 |
| #39 | Fix compilation with metal feature enabled | tc-wolf | open | 4 months ago | 7 |
| #38 | Bug: cannot build correctly on macOS with M2 chip | bruceunx | open | 4 months ago | 0 |
| #37 | Maintenance and improvements | philschmid | open | 4 months ago | 6 |
| #36 | Update llama.cpp | casualjim | closed | 4 months ago | 0 |
| #35 | Fix mis-sized vec | kallsyms | closed | 5 months ago | 1 |
| #34 | chore(deps): bump shlex from 1.1.0 to 1.3.0 | dependabot[bot] | open | 5 months ago | 0 |
| #33 | feat: patch the metal shader source into the llama.cpp code at build time, so the file need not be in the binary's cwd | pixelspark | closed | 5 months ago | 0 |
| #32 | Include ggml-metal.metal file in source code | pixelspark | closed | 5 months ago | 1 |
| #31 | Support for GBNF grammars | benbot | open | 6 months ago | 2 |
| #30 | Using metal and `n_gpu_layers` produces no tokens | jasonw247 | open | 6 months ago | 5 |
| #29 | Error running Phi-2 models | jasonw247 | open | 6 months ago | 0 |
| #28 | Remove or add a way to disable `println!("count {}", reverse_count);` | alescdb | open | 6 months ago | 1 |
| #27 | Upgrade llama.cpp to b1742 | bedwards | closed | 6 months ago | 0 |
| #26 | clang - fatal error: 'assert.h' file not found | RussellCanfield | open | 6 months ago | 2 |
| #25 | Upgraded llama.cpp to b1575 | janeisklar | closed | 7 months ago | 0 |
| #24 | Error when enabling CUDA on Windows | Kuinox | closed | 2 months ago | 12 |
| #23 | `LLama` is not `Send` | tadad | closed | 7 months ago | 4 |
| #22 | Sometimes crashes with UTF-8 error | kimtore | open | 7 months ago | 3 |
| #21 | Multithreading support | markcda | closed | 7 months ago | 0 |
| #20 | Fixed gpt_params::~gpt_params double memory deallocation | markcda | closed | 8 months ago | 0 |
| #19 | Slow performance compared to Python binding | JewishLewish | closed | 8 months ago | 1 |
| #18 | Feature flag metal: fails to load model when n_gpu_layers > 0 | phudtran | open | 8 months ago | 8 |
| #17 | llama.cpp ./embedding | Philipp-Sc | open | 8 months ago | 4 |
| #16 | Error in loading models | RodrigoSdeCarvalho | closed | 8 months ago | 5 |
| #15 | Can't build on Ubuntu 22.04 | RodrigoSdeCarvalho | closed | 8 months ago | 1 |
| #14 | Add explicit C and C++ versions to compile calls | jmickeyd | closed | 8 months ago | 0 |
| #13 | Update to latest llama.cpp to allow usage of LLaMAv2 and GGUF models | kouta-kun | closed | 8 months ago | 0 |
| #12 | Fix Windows build | mdrokz | closed | 9 months ago | 0 |
| #11 | Can't build on Mac aarch64 | Scaletta | closed | 8 months ago | 0 |
| #10 | Can't compile on Win64 | Scaletta | closed | 9 months ago | 2 |
| #9 | Metal support for macOS | mdrokz | closed | 9 months ago | 0 |
| #8 | Fix bad examples link | dmitris | closed | 9 months ago | 0 |
| #7 | Implement BLAS support | mdrokz | closed | 10 months ago | 0 |
| #6 | fix: change to proper commit | mdrokz | closed | 10 months ago | 0 |
| #5 | Not cloning llama.cpp submodule | PainSled | closed | 10 months ago | 2 |
| #4 | Promote to master | mdrokz | closed | 12 months ago | 0 |
| #3 | Transform project into a library for publishing to crates.io | mdrokz | closed | 12 months ago | 0 |
| #2 | Implement build system for building llama.cpp | mdrokz | closed | 1 year ago | 0 |
| #1 | Implement options mod | mdrokz | closed | 1 year ago | 0 |