Mozilla-Ocho / llamafile
Distribute and run LLMs with a single file.
https://llamafile.ai
20.57k stars · 1.04k forks
Issues (sorted newest first)
#635 · add stable-diffusion.cpp to install target (fix #580) · rgroesslinger · opened 23 hours ago · 0 comments
#631 · Bug: Unable to integrate the completion endpoint with Cline [Error: type must be string, but is array] · ocrumbs · opened 6 days ago · 2 comments
#630 · Bug: Why llamafile don't remove end token like <|eot_id|> or <end_of_turn>? · jeezrick · opened 1 week ago · 0 comments
#628 · Feature Request: Support AVX-512 for Intel Rocket Lake · aluklon · opened 1 week ago · 0 comments
#627 · Sewer unclogging by pressure · sewerageservices · closed 1 week ago · 0 comments
#626 · Treatment of unpleasant odors · sewerageservices · closed 1 week ago · 0 comments
#625 · Drain cleaning service · sewerageservices · closed 1 week ago · 0 comments
#624 · Sewage drain suction · sewerageservices · closed 1 week ago · 0 comments
#623 · Sewerage service · sewerageservices · closed 1 week ago · 0 comments
#622 · Septic tank cleaning · sewerageservices · closed 1 week ago · 0 comments
#621 · Al Nahr insulation services company · sewerageservices · closed 1 week ago · 0 comments
#620 · Water leak detection company · sewerageservices · closed 1 week ago · 0 comments
#619 · Sewer cleaning company in Sharjah · sewerageservices · closed 1 week ago · 0 comments
#618 · Sewer cleaning company in Ras Al Khaimah · sewerageservices · closed 1 week ago · 0 comments
#617 · Al Nahr sewerage services in the UAE · sewerageservices · closed 1 week ago · 0 comments
#616 · Sewer cleaning company in Dubai · sewerageservices · closed 1 week ago · 0 comments
#615 · Bug: Cannot use llama 3.2 vision · GeorgelPreput · closed 6 days ago · 3 comments
#613 · Add kitops as a way to run llamafile · gorkem · opened 1 week ago · 0 comments
#611 · Bug: Shared memory not working, results in Segfault · abishekmuthian · opened 2 weeks ago · 1 comment
#610 · Bug: segfault loading models with KV quantization and related problems · mseri · opened 2 weeks ago · 0 comments
#609 · Bug: GPU Acceleration works for one but not the other users on same Linux machine · lovenemesis · opened 2 weeks ago · 0 comments
#608 · Granite three support · gabe-l-hart · opened 2 weeks ago · 4 comments
#607 · [llamafiler] doc/v1_chat_completions.md: remove duplicate entry · mseri · opened 2 weeks ago · 0 comments
#604 · Support for configurable URL prefix in llamafiler · vlasky · closed 3 weeks ago · 1 comment
#603 · Bug: imatrix quant gguf models (e.g. IQ3_XS, IQ2_M) not using NV GPU properly with `llamafile-0.8.14` · wingenlit · closed 3 weeks ago · 1 comment
#601 · Bug: 'cmath' file not found on Windows with AMD Dedicated GPU · DK013 · opened 4 weeks ago · 0 comments
#600 · Feature Request: CORS fallback for OpenAI API compatible endpoints · DK013 · opened 4 weeks ago · 0 comments
#599 · Bug: Whisperfile: can't turn off translation · hheexx · opened 4 weeks ago · 2 comments
#598 · Bug: Can't load Llama-3.2-3B-Instruct-f16, i.e. full weights without quantization on CUDA (GTX1070) · SergeyDidenko · closed 4 weeks ago · 3 comments
#597 · Support for configurable URL prefix when running in server mode · vlasky · closed 1 month ago · 0 comments
#593 · feat: setup script for llamafile · hemanth · opened 1 month ago · 3 comments
#591 · Bug: mlock is failing and llama-server is outdated for very long time. · BoQsc · closed 1 month ago · 3 comments
#589 · Bug: llamafiler /v1/embeddings endpoint does not return model name · wirthual · opened 1 month ago · 0 comments
#588 · Bug: `--path` Option Broken When Pointing to a Folder · gorkem · opened 1 month ago · 0 comments
#587 · Bug: binary called ape in PATH breaks everything · step21 · opened 1 month ago · 0 comments
#586 · add llama_matmul_demo2_bf16.c with other parallelize experiment · Djip007 · opened 1 month ago · 2 comments
#585 · Update WSL troubleshooting in README.md · halter73 · opened 1 month ago · 0 comments
#584 · Bug: Phi3.5-mini-instruct Q4 K L gguf based llamafile CuDA error AMD iGPU · eddan168 · opened 1 month ago · 0 comments
#583 · Feature Request: /v1/models endpoint for further openai api compatibility · quantumalchemy · opened 1 month ago · 1 comment
#582 · Bug: unknown argument --rope-scale · charleswg · opened 1 month ago · 1 comment
#581 · Enable GPU support in llama-bench · cjpais · closed 1 month ago · 0 comments
#580 · Bug: install: cannot stat 'o/x86_64/stable-diffusion.cpp/main': No such file or directory · toby3d · opened 1 month ago · 0 comments
#579 · Bug: APE is running on WIN32 inside WSL - whisperfile - zsh · baptistecs · opened 1 month ago · 1 comment
#577 · Bug: Huge difference between prompt processing (tokens/sec) compared to Llama cpp or Ollama · mathav95raj · closed 1 month ago · 7 comments
#576 · Google · istprivatsache · closed 1 month ago · 0 comments
#575 · Update README: Add llama 3.2 to model table · wirthual · closed 2 weeks ago · 2 comments
#571 · /embedding API · invisiblepancake · opened 1 month ago · 0 comments
#570 · No additional speed up after installing oneMKL · invisiblepancake · opened 1 month ago · 0 comments
#569 · (For "history": bench with llamafile V0.8.6) · invisiblepancake · opened 1 month ago · 0 comments
#568 · whisperfile server: convert files without ffmpeg · cjpais · closed 1 month ago · 0 comments