Mozilla-Ocho/llamafile
Distribute and run LLMs with a single file.
https://llamafile.ai
License: Other · 20.58k stars · 1.04k forks
Issues (sorted: Newest)
#567 Feature Request: document change to default context window in 0.8.13 (cbowdon, opened 1 month ago, 0 comments)
#566 Bug: Unable to specify seed from cli (mjpowersjr, opened 2 months ago, 3 comments)
#565 Bug: SIGABRT on macOS 14.7 (23H124) (Omoeba, opened 2 months ago, 0 comments)
#560 Bug: Segmentation fault re-running after installing NVIDIA CUDA. (4kbyte, opened 2 months ago, 0 comments)
#558 Segfault in WIN32 API upon OOM (safeswap, closed 2 months ago, 5 comments)
#557 Bug: gzip: stdin: unexpected end of file (bphd, closed 2 months ago, 2 comments)
#556 Bug: Failed to read audio file (misters2008, opened 2 months ago, 4 comments)
#555 How to integrate react/vue files into llamafile for one file llm+application distribution? (hololeo, closed 2 months ago, 1 comment)
#554 Bug: Memory allocation on windows still fails on some models (DK013, closed 2 months ago, 2 comments)
#553 Bug: ERROR: whisperfile --server (DK013, closed 3 months ago, 1 comment)
#552 Quantize TriLM models using Q2_K_S (ikawrakow, closed 3 months ago, 1 comment)
#548 Feature Request: Add support for Raspberry Pi Ai Kit (beingminimal, opened 3 months ago, 5 comments)
#547 Bug: `ggml-rocm.so not found` in llamafile 0.8.13 (winstonma, opened 3 months ago, 1 comment)
#546 Bug: whisperfile: The --help doesn't match the available options (adamroyjones, closed 3 months ago, 1 comment)
#544 Bug: whisperfile --help doesn't mention --no-prints (simonw, closed 3 months ago, 5 comments)
#542 Bug: llamafile completions API hangs on win10 (Roeya, closed 2 months ago, 3 comments)
#541 Do you have an uncensored LLM as llamafile? (bitcoinmeetups, closed 2 months ago, 3 comments)
#540 Bug: Uncaught SIGABRT (SI_0) with MiniCPM (felix-schultz, closed 3 months ago, 2 comments)
#538 Bug: ILL_ILLOPN when trying to run bartowski/DeepSeek-V2-Chat-0628-GGUF (ELigoP, closed 2 months ago, 1 comment)
#537 Bug: malloc: *** error for object 0x600003310600: pointer being freed was not allocated (groovecoder, opened 3 months ago, 1 comment)
#536 update GGML_HIP_UMA (Djip007, opened 3 months ago, 2 comments)
#535 Update BUILD.mk (Okohedeki, closed 3 months ago, 0 comments)
#534 Fix GPU Layer Limitation in llamafile (BIGPPWONG, closed 2 weeks ago, 1 comment)
#533 Bug: The token generation speed is slower compared to the upstream llama.cpp project (BIGPPWONG, closed 1 week ago, 0 comments)
#532 Bug: unknown argument: --threads‐batch‐draft (moisestohias, opened 3 months ago, 0 comments)
#531 Bug: -ngl doesn't work when running as a systemd service (takelley1, closed 3 months ago, 4 comments)
#530 o//whisper.cpp/main: No such file or directory (sujoykb, closed 3 months ago, 3 comments)
#526 two unconditional stray printfs in llamafile/cuda.c (leighklotz, closed 2 months ago, 1 comment)
#525 Bug: llamafile-bench signal SIGILL, Illegal instruction. (Djip007, closed 3 months ago, 3 comments)
#524 Adding vision support to api_like_OAI (aittalam, closed 1 month ago, 0 comments)
#523 Update readme to note that llamafiles can be run as weights (mofosyne, opened 3 months ago, 2 comments)
#521 Bug: Llamafiler SIGSEGV crash (BorisELG, closed 3 months ago, 3 comments)
#519 Feature Request: Add Gemma 2 2B (metaskills, closed 3 months ago, 3 comments)
#517 Add whisper.cpp (server) support to llamafile (cjpais, closed 3 months ago, 3 comments)
#516 Bug: llama 3.1 and variants fail with error "wrong number of tensors; expected 292, got 291" (camAtGitHub, opened 3 months ago, 10 comments)
#515 Feature Request: Support for microsoft/Phi-3-vision-128k-instruct (azhuvath, opened 3 months ago, 0 comments)
#514 Bug: llamafile do't Load (aifeifei798, closed 3 months ago, 1 comment)
#513 Bug: run-detectors: unable to find an interpreter for ./llava-v1.5-7b-q4.llamafile (wgong, closed 3 months ago, 2 comments)
#512 Bug: Unable to load Mixtral-8x7B-Instruct-v0.1-GGUF on Amazon Linux with AMD EPYC 7R13 (rpchastain, opened 3 months ago, 4 comments)
#511 UX Request: Update readme to mention `llamafile -m foo.llamafile` as an option (mofosyne, opened 3 months ago, 1 comment)
#509 Feature Request: Add support for GLM4-9B and related models (VarLad, opened 3 months ago, 0 comments)
#506 Bug: (tonychan09, closed 3 months ago, 1 comment)
#504 Bug: Mixtral 8x7B fails to return a response after a couple of API calls whill running on AWS g6.12xlarge EC2 instance (rpchastain, opened 4 months ago, 0 comments)
#503 Bug: low CPU usage on AWS Graviton4 compared to ollama (nlothian, opened 4 months ago, 0 comments)
#502 Bug: NUMA support on Windows (BernoulliBox, opened 4 months ago, 0 comments)
#501 CPU memory alloc on Windows sometimes fails (Roeya, closed 3 months ago, 9 comments)
#500 Bug: Not starting in windows (Chris2L, closed 4 months ago, 14 comments)
#499 Bug: unsupported op 'MUL_MAT' on bf16 but not f16 on SmolLM (Stillerman, closed 4 months ago, 1 comment)
#496 Bug: Unable to allocate memory for image embeddings (chattylab, closed 4 months ago, 0 comments)
#495 Supports SmolLM (Stillerman, closed 4 months ago, 6 comments)