arcee-ai / fastmlx
FastMLX is a high-performance, production-ready API to host MLX models.
222 stars · 25 forks
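Several of the issues below (#30, #34, #35) concern OpenAI-API compatibility, meaning clients talk to FastMLX the same way they would talk to OpenAI's HTTP API. A minimal sketch of such a client, assuming the server is running locally on port 8000 (as issue #12 suggests) and exposes an OpenAI-style `/v1/chat/completions` endpoint; the model id and helper names are illustrative, not from the source:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str, max_tokens: int = 100) -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def send_chat_request(base_url: str, payload: dict) -> dict:
    """POST the payload to an OpenAI-style /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    payload = build_chat_request(
        "mlx-community/Meta-Llama-3-8B-Instruct-4bit",  # hypothetical model id
        "Hello!",
    )
    print(json.dumps(payload, indent=2))
    # Sending requires a running FastMLX server, e.g.:
    # print(send_chat_request("http://localhost:8000", payload))
```

Because the payload shape matches OpenAI's, existing OpenAI client libraries pointed at the local base URL should also work, which is what the Open WebUI issues (#32, #35) rely on.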
Issues (newest first)
#40 Tool parsing and tool choice (cmcmaster1, opened 5 days ago, 4 comments)
#39 Add Qwen function calling (cmcmaster1, opened 6 days ago, 2 comments)
#38 Fixed default prompt template in config.json (cmcmaster1, closed 6 days ago, 0 comments)
#37 Fix Docs Link (Blaizzy, opened 3 weeks ago, 0 comments)
#36 Fix list models tests (Blaizzy, opened 3 weeks ago, 0 comments)
#35 OpenWebUI doesn't connect to FastMLX (SwagMuffinMcYoloPants, opened 1 month ago, 5 comments)
#34 /v1/models - format output to match OpenAI styling (aricshow, closed 3 weeks ago, 4 comments)
#33 ReadMe Broken Docs Link (larson-carter, closed 4 weeks ago, 3 comments)
#32 feat: open webui compliance (viljark, closed 2 months ago, 3 comments)
#31 Future: add image generation prompts? (stewartugelow, opened 3 months ago, 1 comment)
#30 Using OpenAI API compliant to support vision models (madroidmaq, closed 2 months ago, 5 comments)
#29 Multiple workers do not share memory, which causes a full model reload for each message generation. (ZachZimm, opened 3 months ago, 5 comments)
#28 Integrate mlx-hub like functionality? (stewartugelow, opened 3 months ago, 2 comments)
#27 Implement Token Usage Tracking (ZachZimm, closed 3 weeks ago, 5 comments)
#26 Memory leak? (iLoveBug, opened 3 months ago, 7 comments)
#25 Fix tools template loader (Jinja 2 Template not found) (Blaizzy, closed 3 months ago, 0 comments)
#24 Implement CLI Client for FastMLX (Blaizzy, opened 3 months ago, 0 comments)
#23 FastMLX Python Client (Blaizzy, opened 3 months ago, 3 comments)
#22 Implement role:system in messages (pablo-mano, closed 3 months ago, 3 comments)
#21 Add support for tool calling (Blaizzy, closed 3 months ago, 0 comments)
#20 How to make it verbose? (namp, opened 4 months ago, 3 comments)
#19 Add Docs (Blaizzy, closed 4 weeks ago, 2 comments)
#18 Feature Request: Integrate Features from Ollama (evertjr, opened 4 months ago, 8 comments)
#17 Explore integration with exo? (stewartugelow, opened 4 months ago, 4 comments)
#16 Documentation link is 404 (awni, closed 4 weeks ago, 1 comment)
#15 feat: Set `workers` through env variable, improved defaults (SiddhantSadangi, closed 4 months ago, 2 comments)
#14 Potential error in shutdown if manually cancelled (stewartugelow, closed 4 months ago, 6 comments)
#13 Microsoft Phi 3 EOS token not recognized (stewartugelow, closed 4 months ago, 2 comments)
#12 (0.1) Uvicorn running on http://0.0.0.0:8000 (stewartugelow, closed 4 months ago, 2 comments)
#11 Weird image URL bug with Wikimedia (stewartugelow, closed 4 months ago, 9 comments)
#10 Implement Error Handling for Unsupported Model Types (Blaizzy, opened 4 months ago, 0 comments)
#9 Implement Model Loading State Tracker (Blaizzy, opened 4 months ago, 0 comments)
#8 Implement Basic Token Usage Tracking (Blaizzy, opened 4 months ago, 1 comment)
#7 Add Parallel calls usage (Blaizzy, closed 4 months ago, 0 comments)
#6 No chat template specified for llava models error (stewartugelow, opened 4 months ago, 13 comments)
#5 max_tokens not overriding the default (stewartugelow, closed 4 months ago, 1 comment)
#4 Add support for token streaming, parallel jobs and custom CORS (Blaizzy, closed 4 months ago, 0 comments)
#3 Documention link is broken (ipfans, closed 4 weeks ago, 1 comment)
#2 Cross origin support (digicali, closed 4 months ago, 4 comments)
#1 Setup FastMLX (Blaizzy, closed 4 months ago, 0 comments)