Mozilla-Ocho/llamafile
Distribute and run LLMs with a single file.
https://llamafile.ai
16.75k stars · 830 forks
Issues
#415 · llamafile as LLM server for the Mantella mod and Skyrim; it works nicely, but there is a small problem · amonpaike · opened 1 month ago · 6 comments
#414 · Are embeddings not supported with the mistral-7b-instruct-v0.2 model? · norteo · closed 1 month ago · 4 comments
#413 · Illegal instruction when running a llamafile · cdamiens · closed 1 month ago · 7 comments
#412 · Added script to upgrade llamafile archives · mofosyne · closed 1 month ago · 2 comments
#411 · Porting "In-place Upgrading Of Llamafiles Engine Bash Script" to llamafile for general usage · mofosyne · closed 1 month ago · 0 comments
#410 · Out-of-spec `/completions` endpoint · yourbuddyconner · closed 1 month ago · 1 comment
#409 · Would it be possible to support `n_probs` / `logprobs` in the chat completion API? · cbowdon · opened 1 month ago · 0 comments
#408 · Follow-up answers are slow · woheller69 · opened 1 month ago · 1 comment
#407 · Update README.md · matthewbcool · closed 1 month ago · 1 comment
#406 · Compilation error with ROCm: fatal error: 'cmath' file not found · huangjs · closed 1 month ago · 2 comments
#405 · Faster AVX2 matrix multiplications for legacy quants · ikawrakow · closed 1 month ago · 3 comments
#404 · Segfault from `/embedding` endpoint · k8si · closed 1 month ago · 0 comments
#403 · GGML_ASSERT: ggml-cuda.cu:9198: !"CUDA error" · laooopooo · closed 1 month ago · 5 comments
#402 · More LLaVA-style models: https://github.com/Meituan-AutoML/MobileVLM and https://github.com/vikhyat/moondream · kinchahoy · opened 1 month ago · 2 comments
#401 · Update README.md · h2oicsaba · closed 1 month ago · 2 comments
#400 · Signal SIGILL (illegal instruction) · DjagbleyEmmanuel · closed 1 month ago · 0 comments
#399 · Failed to load model; here's what came out · DjagbleyEmmanuel · closed 1 month ago · 0 comments
#398 · Integration with Open WebUI · ChristianWeyer · opened 1 month ago · 0 comments
#397 · Fails to load custom UI on Apple Silicon (M1 Pro): shows "File not found" on localhost:8080 · towardmay · opened 1 month ago · 0 comments
#396 · ERROR: UtilGetPpid:1293: Failed to parse · MicahZoltu · opened 1 month ago · 2 comments
#395 · Template import/export and defaults · bannsec · opened 1 month ago · 1 comment
#394 · Faster AVX2 prompt processing for k-quants and IQ4_XS · ikawrakow · closed 1 month ago · 4 comments
#392 · server: multimodal - fix misreported prompt and num prompt tokens · cjpais · closed 1 month ago · 0 comments
#391 · Unexpected output from server.cpp `/embedding` endpoint · k8si · closed 1 month ago · 1 comment
#390 · Unable to run on Mac · fxcl · closed 1 month ago · 0 comments
#389 · Example of command-line chat · magdesign · closed 1 month ago · 1 comment
#388 · Feature request: option to specify base URL for server mode · vlasky · opened 2 months ago · 2 comments
#387 · chatml/cml command-line option rejected · vlasky · opened 2 months ago · 1 comment
#386 · GPU offloading not working on a system with an AMD 5900HX CPU · vlasky · opened 2 months ago · 6 comments
#385 · Feature request: Apple Silicon Neural Engine - Core ML model package format support · qdrddr · opened 2 months ago · 1 comment
#384 · GPU offloading doesn't seem to be working · v4u6h4n · opened 2 months ago · 9 comments
#383 · Uncaught SIGSEGV (SEGV_1722) at 0x7ffcc58c53ac · gaust · opened 2 months ago · 0 comments
#382 · If the number of words in the answer exceeds the limit, an error is reported indefinitely · abpyu · opened 2 months ago · 0 comments
#381 · Running llamafile on 2 GPUs instead of 1 · TheAmpPlayer · opened 2 months ago · 0 comments
#380 · Does not run on Windows · zxc524580210 · opened 2 months ago · 2 comments
#379 · AgentTesla!ml detected on TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile · jnes12 · opened 2 months ago · 0 comments
#378 · Uncaught SIGSEGV (SEGV_MAPERR) with Meta-Llama-3-8B-Instruct.Q2_K · Lathanao · opened 2 months ago · 0 comments
#377 · Terminated by signal SIGILL (illegal instruction) · DjagbleyEmmanuel · opened 2 months ago · 1 comment
#376 · Newbie edit: clarity on 'man' · matthewbcool · closed 1 month ago · 3 comments
#375 · Follow-up to the offload-arch fix: qsort and static linkage · rasmith · closed 2 months ago · 0 comments
#374 · Uncaught SIGSEGV (SEGV_1722) at 0x7ffd45a1cf19 · knowfoot · opened 2 months ago · 3 comments
#373 · Linux: file does not contain a valid CIL image · jeancf · closed 2 months ago · 3 comments
#372 · cudaMalloc failed: out of memory with TinyLlama-1.1B · Lathanao · opened 2 months ago · 4 comments
#371 · Llama 3 chat template · woheller69 · opened 2 months ago · 1 comment
#369 · run-detectors: unable to find an interpreter for ./Meta-Llama-3-8B-Instruct.Q6_K.llamafile · superkuh · closed 2 months ago · 0 comments
#368 · Fix get_amd_offload_arch_flag so it will match offload-arch types having alphanumeric names · rasmith · closed 2 months ago · 1 comment
#365 · link_cuda_dso: warning: dlopen() isn't supported on this platform: failed to load library · vlawhern · closed 2 months ago · 2 comments
#364 · Got a trojan warning from Windows Defender when running phi-2.Q5_K_M.llamafile · ugthefluffster · closed 2 months ago · 2 comments
#363 · Question · fakerybakery · closed 2 months ago · 3 comments
#362 · Run as daemon / background process · JLopeDeB · opened 2 months ago · 1 comment