ml-explore / mlx-examples
Examples in the MLX framework
MIT License · 6.3k stars · 898 forks

Issues
#1131 Fix bug in FluxSampler.timesteps method (opened by hehua2008, 18 hours ago, 0 comments)
#1129 Add mentions of MLX-my-repo. (opened by Vaibhavs10, 2 days ago, 0 comments)
#1128 Add olmo2 (opened by awni, 4 days ago, 1 comment)
#1127 Support for OLMo2 (opened by mseri, 4 days ago, 0 comments)
#1126 API Clarity: load_model vs load_models Module Structure (opened by 4rash-4, 4 days ago, 1 comment)
#1125 Accept mx.array type for prompt argument for stream_generate (closed, by neilmehta24, 4 days ago, 0 comments)
#1124 Memory leak in mlx_lm.server? (closed, by ivanfioravanti, 3 days ago, 3 comments)
#1123 Add `last_segment_tokens` to `stream_generate` GenerationResponse (opened by mattjcly, 6 days ago, 0 comments)
#1122 Put prompt processing in same stream (closed, by awni, 6 days ago, 0 comments)
#1121 docs: update stream_generate return type annotation (closed, by madroidmaq, 6 days ago, 0 comments)
#1120 mlx_lm.generate --system-prompt doesn't convert `\n` as new line. (closed, by chigkim, 6 days ago, 1 comment)
#1119 Fix object property value in mlx_lm.server chat completions response not matching OpenAI spec (closed, by kconner, 6 days ago, 2 comments)
#1118 Fix: Allow converting whisper models from local paths (closed, by remixer-dec, 6 days ago, 0 comments)
#1117 Allow loading from diffusers ckpt (opened by angeloskath, 1 week ago, 0 comments)
#1116 Use other flux-based model files? (opened by puppyapple, 1 week ago, 1 comment)
#1115 Fix format (closed, by angeloskath, 1 week ago, 0 comments)
#1114 Add seed argument to stable diffusion image to image (closed, by louen, 1 week ago, 0 comments)
#1113 Fix data_iter in prepare_dataset from speechcommands example (opened by sakares, 1 week ago, 2 comments)
#1111 ValueError: [dequantize] The matrix should be given as a uint32 (closed, by chaihahaha, 1 week ago, 4 comments)
#1110 Unable to load Qwen2-VL-1.5B-Instruct model using mlx_lm (closed, by dolphingarlic, 1 week ago, 1 comment)
#1108 [Feature Request] Unable to Unload or Swap adapters at runtime (opened by chimezie, 1 week ago, 2 comments)
#1105 [MLX LM] Fix f-string formatting in memory warning message (closed, by skeetmtp, 2 weeks ago, 0 comments)
#1104 Significantly reduced inference speed after Lora finetunig (opened by hschaeufler, 2 weeks ago, 1 comment)
#1103 Completion only fine-tuning of instruction models with collections of HF datasets (opened by chimezie, 3 weeks ago, 4 comments)
#1102 libc++abi: Unable to build metal library from source error: invalid value 'metal3.1' in '-std=metal3.1' (closed, by maxlund, 3 weeks ago, 11 comments)
#1101 Whisper transcribing music sounds (opened by aaa3334, 3 weeks ago, 2 comments)
#1100 Tencent HunYuan MOE model (closed, by awni, 1 week ago, 3 comments)
#1099 Generation refactor: part 2 (closed, by awni, 1 week ago, 0 comments)
#1097 Support for AI21 Jamba-1.5 (opened by vlbosch, 3 weeks ago, 1 comment)
#1096 FLUX: Add support for setup configuration to publish module (opened by madroidmaq, 3 weeks ago, 0 comments)
#1095 LoRA trains on tokens with duplicated BOS for some models (closed, by chimezie, 3 weeks ago, 3 comments)
#1094 [MLX LM] Sampler refactor + a few improvements (closed, by awni, 3 weeks ago, 1 comment)
#1093 Fix rotating kv cache size (closed, by angeloskath, 3 weeks ago, 0 comments)
#1092 Fix spm decoder multi-byte (closed, by awni, 3 weeks ago, 0 comments)
#1091 "Your computer was restarted because of a problem" after QLoRa on Mistral Nemo fused and (later) quantized checkpoint (closed, by chimezie, 3 weeks ago, 9 comments)
#1090 Generalize HF datasets to a collection of HF datasets via `hf_datasets` (closed, by chimezie, 2 weeks ago, 1 comment)
#1089 chore(mlx-lm): add max token arg for mlx_lm.chat (closed, by mzbac, 3 weeks ago, 0 comments)
#1088 [Feature Request] Custom "chat" HF datasets (opened by chimezie, 4 weeks ago, 0 comments)
#1087 load_custom_hf_dataset not handling the text_feature argument properly (opened by chimezie, 4 weeks ago, 0 comments)
#1085 Custom local dataset features (opened by chimezie, 4 weeks ago, 0 comments)
#1084 [mlx-whisper] Error while transcribing - Cannot infer the shape of an empty array (closed, by thelazyoxymoron, 4 weeks ago, 3 comments)
#1083 mlx-whisper Not installable on python 3.13 (closed, by alper, 4 weeks ago, 2 comments)
#1082 Fix whisper edge case when decoding past limit (closed, by awni, 4 weeks ago, 0 comments)
#1081 Clear cache every now and then (closed, by awni, 1 month ago, 0 comments)
#1080 Whisper improvements (closed, by awni, 1 month ago, 0 comments)
#1079 Fix special tokens for deep seek (closed, by awni, 1 month ago, 0 comments)
#1078 Add Max Token Limit for Generation (opened by N8python, 1 month ago, 29 comments)
#1077 Detokenizer crash with the mlx-lm 19.2 and DeepSeek-Coder-V2 EOS Token (closed, by mattjcly, 1 month ago, 0 comments)
#1076 [BUG] memory during fine tuning grows until MacBook crashes (closed, by hschaeufler, 1 month ago, 4 comments)
#1075 Quantized KV Cache (closed, by barronalex, 1 month ago, 1 comment)