LLukas22 / llm-rs-python
Unofficial Python bindings for the Rust `llm` library. 🐍❤️🦀
MIT License · 71 stars · 4 forks
Issues
#36 Will you please guide how to run the conversion script? (AayushSameerShah, opened 9 months ago, 1 comment)
#35 Need help for converting to rust (andri-jpg, closed 9 months ago, 0 comments)
#34 thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidMagic { path: "model_merak.bin" }', src/model.rs:47:12 (widyaputeriaulia10, opened 10 months ago, 3 comments)
#33 How much RAM needed to convert gpt2 13b model to ggml using your Manual convert function? (JohnClaw, opened 10 months ago, 1 comment)
#32 Is streaming supported with langchain AsyncIteratorCallbackHandler? (AdrianLsk, opened 10 months ago, 1 comment)
#31 Custom RoPE scaling and new sampler backend (LLukas22, closed 11 months ago, 0 comments)
#30 use pydantic.v1 in langchain.py (andri-jpg, closed 11 months ago, 8 comments)
#29 Deprecated Usage of `@root_validator` Causing Error (andri-jpg, closed 11 months ago, 2 comments)
#28 Add llm-rs-python to haystack-integrations (anakin87, opened 11 months ago, 7 comments)
#27 GPU Not Utilized When Using llm-rs with CUDA Version (andri-jpg, opened 12 months ago, 2 comments)
#26 Stabilize GPU support (LLukas22, closed 12 months ago, 0 comments)
#25 Moving quantized mode with bin and meta file to new machine doesn't works (sidharthiimc, closed 1 year ago, 3 comments)
#24 Cublas/CLBlast/Metal Support (LLukas22, closed 1 year ago, 0 comments)
#23 Add Haystack integration (LLukas22, closed 1 year ago, 0 comments)
#22 Added mapping for "LlamaForCausalLM" (LLukas22, closed 1 year ago, 0 comments)
#21 Add LangChain support (LLukas22, closed 1 year ago, 0 comments)
#20 Re-enable macos build (LLukas22, closed 1 year ago, 0 comments)
#19 GPU support - Feature Request (sidharthiimc, closed 1 year ago, 1 comment)
#18 Feature Request: Falcon 7B support (sidharthiimc, opened 1 year ago, 1 comment)
#17 Added HuggingFace Tokenizers support (LLukas22, closed 1 year ago, 0 comments)
#16 Auto-Quantization get's stuck in IPython environment (LLukas22, closed 1 year ago, 0 comments)
#15 4 bit quantization not happening - code getting stuck - (sidharthiimc, closed 1 year ago, 4 comments)
#14 How to convert LoRA adapters for using? (sidharthiimc, closed 1 year ago, 2 comments)
#13 Added new quantization formats (LLukas22, closed 1 year ago, 0 comments)
#12 Add streaming support (LLukas22, closed 1 year ago, 0 comments)
#11 Update GGML quantization format (LLukas22, closed 1 year ago, 0 comments)
#10 Add Huggingface Hub integrations (LLukas22, closed 1 year ago, 0 comments)
#9 Added AutoConversion\AutoQuantizer\AutoModel (LLukas22, closed 1 year ago, 0 comments)
#8 Added quantization (LLukas22, closed 1 year ago, 0 comments)
#7 Update Branch (LLukas22, closed 1 year ago, 0 comments)
#6 Add Mpt Support (LLukas22, closed 1 year ago, 0 comments)
#5 Added Model documentation (LLukas22, closed 1 year ago, 0 comments)
#4 Added Documentation (LLukas22, closed 1 year ago, 0 comments)
#3 Added LoRA support (LLukas22, closed 1 year ago, 0 comments)
#2 Release GIL when generating (LLukas22, closed 1 year ago, 0 comments)
#1 Moved to llm-rs (LLukas22, closed 1 year ago, 0 comments)