arcee-ai / fastmlx
FastMLX is a high-performance, production-ready API for hosting MLX models.
https://arcee-ai.github.io/fastmlx/
307 stars · 38 forks
Issues (newest first)
#49 Fix: update MLX imports for compatibility (piyushbatra1999, opened 1 week ago, 0 comments)
#48 Update utils.py (cmcmaster1, opened 2 months ago, 0 comments)
#47 Gemma3 12B support? (yukiarimo, opened 2 months ago, 0 comments)
#46 Model Versioning (Abhijit-without-h, opened 2 months ago, 2 comments)
#45 fix generation with make_sampler (qnguyen3, closed 3 months ago, 2 comments)
#44 LM request.temperature fails with error: TypeError: generate_step() got an unexpected keyword argument 'temp' (ami-navon, opened 3 months ago, 2 comments)
#43 FastMLX keeps crashing. Python 3.11/3.12, neither works: tensorflow, torch, flax (qdrddr, opened 4 months ago, 0 comments)
#42 AttributeError: 'str' object has no attribute 'get' and failed request with vision model (HysMX, opened 5 months ago, 4 comments)
#41 Update utils.py (jiyzhang, opened 6 months ago, 4 comments)
#40 Tool parsing and tool choice (cmcmaster1, opened 6 months ago, 4 comments)
#39 Add Qwen function calling (cmcmaster1, closed 2 months ago, 2 comments)
#38 Fixed default prompt template in config.json (cmcmaster1, closed 6 months ago, 0 comments)
#37 Fix Docs Link (Blaizzy, closed 6 months ago, 0 comments)
#36 Fix list models tests (Blaizzy, opened 7 months ago, 0 comments)
#35 OpenWebUI doesn't connect to FastMLX (SwagMuffinMcYoloPants, opened 8 months ago, 5 comments)
#34 /v1/models - format output to match OpenAI styling (aricshow, closed 7 months ago, 4 comments)
#33 ReadMe Broken Docs Link (larson-carter, closed 7 months ago, 3 comments)
#32 feat: open webui compliance (viljark, closed 9 months ago, 3 comments)
#31 Future: add image generation prompts? (stewartugelow, opened 9 months ago, 1 comment)
#30 Using OpenAI API compliant to support vision models (madroidmaq, closed 8 months ago, 5 comments)
#29 Multiple workers do not share memory, which causes a full model reload for each message generation (ZachZimm, opened 9 months ago, 5 comments)
#28 Integrate mlx-hub like functionality? (stewartugelow, opened 9 months ago, 2 comments)
#27 Implement Token Usage Tracking (ZachZimm, closed 7 months ago, 5 comments)
#26 Memory leak? (iLoveBug, opened 10 months ago, 8 comments)
#25 Fix tools template loader (Jinja 2 Template not found) (Blaizzy, closed 10 months ago, 0 comments)
#24 Implement CLI Client for FastMLX (Blaizzy, opened 10 months ago, 0 comments)
#23 FastMLX Python Client (Blaizzy, opened 10 months ago, 3 comments)
#22 Implement role:system in messages (pablo-mano, closed 10 months ago, 3 comments)
#21 Add support for tool calling (Blaizzy, closed 10 months ago, 0 comments)
#20 How to make it verbose? (namp, opened 10 months ago, 3 comments)
#19 Add Docs (Blaizzy, closed 7 months ago, 2 comments)
#18 Feature Request: Integrate Features from Ollama (evertjr, opened 11 months ago, 8 comments)
#17 Explore integration with exo? (stewartugelow, opened 11 months ago, 4 comments)
#16 Documentation link is 404 (awni, closed 7 months ago, 1 comment)
#15 feat: Set `workers` through env variable, improved defaults (SiddhantSadangi, closed 11 months ago, 2 comments)
#14 Potential error in shutdown if manually cancelled (stewartugelow, closed 11 months ago, 6 comments)
#13 Microsoft Phi 3 EOS token not recognized (stewartugelow, closed 11 months ago, 2 comments)
#12 (0.1) Uvicorn running on http://0.0.0.0:8000 (stewartugelow, closed 11 months ago, 2 comments)
#11 Weird image URL bug with Wikimedia (stewartugelow, closed 11 months ago, 9 comments)
#10 Implement Error Handling for Unsupported Model Types (Blaizzy, opened 11 months ago, 0 comments)
#9 Implement Model Loading State Tracker (Blaizzy, opened 11 months ago, 0 comments)
#8 Implement Basic Token Usage Tracking (Blaizzy, opened 11 months ago, 1 comment)
#7 Add Parallel calls usage (Blaizzy, closed 11 months ago, 0 comments)
#6 No chat template specified for llava models error (stewartugelow, opened 11 months ago, 13 comments)
#5 max_tokens not overriding the default (stewartugelow, closed 11 months ago, 1 comment)
#4 Add support for token streaming, parallel jobs and custom CORS (Blaizzy, closed 11 months ago, 0 comments)
#3 Documentation link is broken (ipfans, closed 7 months ago, 1 comment)
#2 Cross origin support (digicali, closed 11 months ago, 4 comments)
#1 Setup FastMLX (Blaizzy, closed 11 months ago, 0 comments)