ml-explore / mlx-examples
Examples in the MLX framework
MIT License · 6.3k stars · 898 forks
Issues (sorted by newest first)
Getting TypeError: 'module' object is not subscriptable on generations with prompts over a certain size (#1022) · chimezie · closed 1 month ago · 0 comments
T5 tokenizer decoding error with CodeT5+ (#1021) · zcbenz · opened 1 month ago · 1 comment
MusicGen (#1020) · barronalex · closed 1 month ago · 6 comments
[BUG] generate hangs after multiple iterations (#1019) · hschaeufler · closed 1 month ago · 3 comments
`mlx_lm.chat` command is not available after installing `mlx-lm` (#1018) · gh640 · closed 1 month ago · 3 comments
[BUG] Different default values for temperature (#1017) · hschaeufler · closed 1 month ago · 1 comment
Return logprobs from generate API (#1016) · farris · closed 1 month ago · 2 comments
More cache improvements (#1015) · awni · closed 1 month ago · 9 comments
Fix rotating KV cache for chat use case (#1014) · awni · closed 1 month ago · 1 comment
Make T5 work with official models without conversions (#1013) · zcbenz · closed 1 month ago · 2 comments
mlx_whisper: add support for audio input from stdin (#1012) · anthonywu · closed 3 weeks ago · 2 comments
libc++abi: terminating due to uncaught exception of type std::runtime_error: GPU FFT is only implemented for 1D, forward, complex FFTs. (#1011) · CrispStrobe · closed 2 months ago · 3 comments
Adding support for Mamba2 (#1009) · Goekdeniz-Guelmez · opened 2 months ago · 8 comments
Support for Nvidia Nemotron and NVLM 1.0 (#1007) · vlbosch · opened 2 months ago · 1 comment
Allow conversion of Whisper turbo models (#1006) · awni · closed 2 months ago · 0 comments
Llama 3.2 1B model not generating response (#1005) · mustangs0786 · closed 2 months ago · 3 comments
repetition_penalty and logits_bias just using logits_processors (#1004) · nathanrchn · closed 2 months ago · 2 comments
Server: support function calling (#1003) · madroidmaq · closed 2 months ago · 5 comments
EOS token warning (#1002) · mike-schiller · closed 2 months ago · 2 comments
PR: Add KV-cache creation capability to mlx_lm.generate for after a text completion (#1001) · mark-lord · closed 1 month ago · 3 comments
RotatingKVCache: Problem when reusing cache between multiple generations (#1000) · zcbenz · closed 1 month ago · 1 comment
Is “tokenizer.ggml.unknown_token_id” definitely required? (#999) · fynv · closed 2 months ago · 2 comments
Fix llava model when using text-only prompt (#998) · zcbenz · closed 2 months ago · 0 comments
LoRA: Support HuggingFace dataset via data parameter (#996) · madroidmaq · closed 2 months ago · 0 comments
LoRA: support tools (function calling) format datasets (#995) · madroidmaq · closed 2 months ago · 5 comments
ModuleNotFoundError: No module named 'huggingface_hub.utils._errors' (#994) · ndurner · closed 2 months ago · 3 comments
Fix export to gguf (#993) · angeloskath · closed 2 months ago · 0 comments
[BUG] mlx_lm.fuse aborts with error message "AttributeError: 'bytes' object has no attribute 'encode'." (#992) · hschaeufler · closed 2 months ago · 3 comments
Encodec (#991) · awni · closed 2 months ago · 1 comment
Don't use private exception class (not in latest huggingface_hub) (#990) · awni · closed 2 months ago · 0 comments
Enable caching for 'generate' and 'stream_generate' functions to ensure persistence of cache across multiple requests (#989) · nath1295 · closed 1 month ago · 3 comments
huggingface_hub 0.25 causes error upon importing from mlx_lm (#988) · arogister · closed 2 months ago · 1 comment
Backward hooks (#987) · Aakriti23 · closed 2 months ago · 3 comments
Adding an equivalent PyTorch or TensorFlow version of the models for a true benchmark (#986) · thegodone · closed 1 month ago · 1 comment
Learning rate approaches warmup_init value (#985) · hschaeufler · closed 2 months ago · 2 comments
Add /v1/models endpoint to mlx_lm.server (#984) · jamesm131 · closed 2 months ago · 3 comments
Add logits_processor option to generate_step function (#983) · nathanrchn · closed 2 months ago · 6 comments
Details on scale parameter for LoRA (#982) · hschaeufler · closed 2 months ago · 5 comments
Fix bug in upload + docs nit (#981) · awni · closed 2 months ago · 0 comments
I tried madlad400, but there is a problem with the output if it is float16 (#980) · otmb · opened 2 months ago · 5 comments
Fix the cache_prompt (#979) · angeloskath · closed 2 months ago · 0 comments
[Feature Request] mlx_lm.cache_prompt | Save cached_prompt as plaintext in the kv-cache-file metadata (#978) · mark-lord · opened 2 months ago · 0 comments
Possible bug with prompt cache (#977) · awni · closed 2 months ago · 3 comments
Is mlx-examples/llms/CONTRIBUTING.md still up-to-date? (#976) · Jonathan-Dobson · closed 2 months ago · 1 comment
Linting / type definitions (#975) · Jonathan-Dobson · opened 2 months ago · 0 comments
Update LLM generation docs to use chat template (#973) · awni · closed 2 months ago · 2 comments
Using generate() yields unusable response with Llama 3.1 instruct 4bit (#972) · Jonathan-Dobson · closed 2 months ago · 2 comments
Fix attention layers map for SD-2-1-Base (#971) · pranav4501 · opened 3 months ago · 2 comments
[Suggestion] Add KV cache features to the Python API (#970) · ibehnam · closed 1 month ago · 1 comment
Make sure to import the correct "version" module when installing mlx_whisper and mlx_lm from local source code. (#969) · walkoncross · closed 2 months ago · 7 comments