Contact Details
elgarehb@gmail.com
What happened?
First, thank you for your work and all your improvements. I'm trying to run llamafiler with the mxbai-embed-large-v1-f16.gguf model provided here: mixedbread-ai/mxbai-embed-large-v1.
Here is the command I executed:
./llamafile-0.8.12/bin/llamafiler -m /opt/model/mxbai-embed-large-v1-f16.gguf -l 0.0.0.0:9000
But after some requests, the server crashes and appears to be stuck:
The same model works well with the old llamafile server method:
/usr/bin/llamafile --server --nobrowser --embedding --host 0.0.0.0 --port 9000 -m /opt/model/mxbai-embed-large-v1-f16.gguf
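In case it helps reproduce the hang, a loop like the one below can drive repeated embedding requests at the server. The `/embedding` path and the `{"content": ...}` JSON body are assumptions based on the old llamafile server's API; adjust them if llamafiler expects something different.

```shell
# Reproduction sketch: send repeated embedding requests until the server hangs.
# Assumes a POST /embedding endpoint taking {"content": "..."} JSON, as in the
# old llamafile --server mode -- this endpoint shape is an assumption.
for i in $(seq 1 100); do
  curl -s -X POST http://0.0.0.0:9000/embedding \
    -H "Content-Type: application/json" \
    -d '{"content": "test sentence '"$i"'"}' > /dev/null \
    || { echo "request $i failed"; break; }
done
```

With the llamafiler command above, the crash showed up only after several requests, so a loop like this should surface it faster than manual testing.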
Version
llamafile v0.8.12
What operating system are you seeing the problem on?
Linux
Relevant log output