armbues / SiLLM
SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.
MIT License · 221 stars · 21 forks
Issues (sorted by newest)
#14 How to use SiLLM to execute GPTQ or GGUF models · White-Friday · closed 1 month ago · 2 comments
#13 Regarding Support for Quantized Models · White-Friday · closed 2 months ago · 1 comment
#12 Change the project name · rosmur · closed 3 months ago · 1 comment
#11 I had an error when training Phi-3-mini-4k-instruct at DPO, so I fixed it locally. · kawataki-yoshika · closed 5 months ago · 2 comments
#10 [Feature request] Add streaming support to OpenAI-compatible API. · s-kostyaev · opened 5 months ago · 1 comment
#9 Dependency Conflict with Protobuf Version Requirement in sillm-mlx · GusLovesMath · closed 2 months ago · 2 comments
#8 Phi 3 4k, 128k not working · kishoretvk · closed 6 months ago · 4 comments
#7 Installation Error · AlgebraicAmin01 · closed 6 months ago · 1 comment
#6 Modifications to sillm.chat · magnusviri · closed 2 months ago · 3 comments
#5 Slowness of sillm.chat on M2 Air with 16 GB RAM · kylewadegrove · closed 6 months ago · 5 comments
#4 AttributeError: 'str' object has no attribute 'apply_chat_template' · alew3 · closed 6 months ago · 4 comments
#3 Wrong parameters passed in dpo.py, generate · ivanfioravanti · closed 6 months ago · 4 comments
#2 [load_gguf] gguf_tensor_to_f16 failed · DenisSergeevitch · closed 6 months ago · 1 comment
#1 [Feature request] Add ORPO finetuning · s-kostyaev · opened 6 months ago · 2 comments