Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

All Sorts of Issues Executing (WSL and Windows) #356

Open gjnave opened 4 months ago

gjnave commented 4 months ago

Hey guys, so I'm having a difficult time getting certain files to load. Here's one example: the file below works on Windows if I change it to an .exe, but fails when I leave it as a llamafile for WSL.

cognibuild@DESKTOP-I6N5JH7:/mnt/e/OneClickLLMs$ chmod +x rocket-3b.Q5_K_M.llamafile.exe rocket-3b.Q5_K_M.llamafile
cognibuild@DESKTOP-I6N5JH7:/mnt/e/OneClickLLMs$ ./rocket-3b.Q5_K_M.llamafile.exe rocket-3b.Q5_K_M.llamafile -ngl 9999
-bash: ./rocket-3b.Q5_K_M.llamafile.exe: Invalid argument

Then there's this one, which I can't get to run on either Windows or WSL (with the extension properly changed):

Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile.exe
"This App Cant Run on your PC" (Big blue screen)

Any advice is appreciated.

hakanai commented 4 months ago

Came here to report a very similar experience.

$ chmod +x Meta-Llama-3-70B-Instruct.Q4_0.llamafile
$ ./Meta-Llama-3-70B-Instruct.Q4_0.llamafile -ngl 9999
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile: Invalid argument

I'm running exactly what the README says to run and it doesn't do the thing. But I had downloaded the original llamafile when it was first released and that version worked fine. What has changed between that release and this one?

Renaming it to end in .exe and running it directly on Windows instead, I get this:

(screenshot of the Windows error dialog)

BindingOx commented 4 months ago

From the README:

Unfortunately, Windows users cannot make use of many of these example llamafiles because Windows has a maximum executable file size of 4GB, and all of these examples exceed that size. (The LLaVA llamafile works on Windows because it is 30MB shy of the size limit.) But don't lose heart: llamafile allows you to use external weights; this is described later in this document.

I want to know how to reduce the size to < 4GB

BindingOx commented 4 months ago

This seems to work on Windows: rename the llamafile binary from the releases page to .exe, then point it at your GGUF weights with -m:

.\llamafile.exe -m "path\to\gguf\file.gguf" -ngl 9999
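
Spelling that out (a sketch, with example paths and version number; check the releases page for the actual file name):

:: download the engine-only binary from the releases page and rename it to .exe
curl -L -o llamafile.exe https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.1/llamafile-0.8.1
:: the weights stay external, so no single .exe has to cross the 4GB limit
.\llamafile.exe -m "path\to\gguf\file.gguf" -ngl 9999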

zanderlewis commented 4 months ago

I want to know how to reduce the size to < 4GB

The README says to download the weights separately in order to run the llamafile on Windows.

gjnave commented 4 months ago

On Windows it works great: just unzip the file and you can load the weights separately with a .bat file (see the sketch below).

As for WSL, the .sh file should run, but it doesn't.
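
A minimal sketch of that workflow (file names are illustrative; a llamafile is also a valid ZIP archive, so unzip can read it):

unzip -l rocket-3b.Q5_K_M.llamafile        # list contents; the weights are the .gguf entry
unzip rocket-3b.Q5_K_M.llamafile "*.gguf"  # extract just the weights

rem run.bat - hypothetical launcher, assuming the engine binary was renamed to llamafile.exe
llamafile.exe -m rocket-3b.Q5_K_M.gguf -ngl 9999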

jasonmcaffee commented 4 months ago

Downloading llamafile-0.8.1 from the releases page, then renaming it to have an .exe extension, and using that to run the model worked for me.

It would be nice if the project's README had similar instructions:

.\llamafile-0.8.1.exe -m "Meta-Llama-3-70B-Instruct.Q4_0.llamafile.exe" --server -ngl 9999

On an RTX 3090, I get 0.5 tokens per second.

InsideZhou commented 4 months ago

Ran into the same issue:

./Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile: Invalid argument

When I disabled the WIN32 interop feature as follows:

[interop]
enabled=false

I got the following message:

<3>WSL (2233) ERROR: UtilAcceptVsock:250: accept4 failed 110
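
For anyone else trying this: that [interop] section lives in /etc/wsl.conf inside the distro (a sketch, assuming a standard WSL2 setup):

# /etc/wsl.conf
[interop]
enabled=false

# then, from Windows, restart the distro so the setting takes effect:
#   wsl.exe --shutdown
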
estebanthi commented 3 months ago

Ran into the same issue:

./Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile: Invalid argument

When I disabled the WIN32 interop feature as follows:

[interop]
enabled=false

I got the following message:

<3>WSL (2233) ERROR: UtilAcceptVsock:250: accept4 failed 110

Same here, have you found a fix?

gjnave commented 3 months ago

Unfortunately not. I've stopped working with this project for now and put my attention on KoboldCPP.

Xav-Pe commented 2 months ago

Same error here, with interop disabled.

 ./llava-v1.5-7b-q4.llamafile
<3>WSL (273) ERROR: UtilAcceptVsock:250: accept4 failed 110

orangewise commented 2 months ago

Same:

./llava-v1.5-7b-q4.llamafile
<3>WSL (667) ERROR: UtilAcceptVsock:250: accept4 failed 110

jart commented 1 month ago

[Unit]
Description=cosmopolitan APE binfmt service
After=wsl-binfmt.service

[Service]
Type=oneshot
# register the APE (Actually Portable Executable) format with binfmt_misc,
# keyed on the MZqFpD magic, so the kernel hands llamafiles to /usr/bin/ape
ExecStart=/bin/sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"

[Install]
WantedBy=multi-user.target

Put this in /etc/systemd/system/cosmo-binfmt.service

Then run sudo systemctl enable cosmo-binfmt.
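
To check the registration took (a sketch; it assumes the APE loader the unit points at is actually installed at /usr/bin/ape):

sudo systemctl start cosmo-binfmt     # or reboot once the unit is enabled
cat /proc/sys/fs/binfmt_misc/APE      # should print "enabled" plus the MZqFpD magic
./llava-v1.5-7b-q4.llamafile --help   # any llamafile should now execute directly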

zvan92 commented 4 weeks ago

To fix the "invalid argument" error in WSL, I ran both of these and then tried again, which worked:

sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop'
sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop-late'
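
If it helps: as I understand binfmt_misc (an assumption, not verified against WSL internals), writing -1 to an entry removes that registration, so WSL's MZ-matching interop handler stops intercepting the llamafile before the APE loader sees it. WSL likely re-registers the entries after a restart, so this may need re-running following wsl.exe --shutdown. To confirm before retrying:

ls /proc/sys/fs/binfmt_misc/                            # the WSLInterop entries should be gone
./Meta-Llama-3-8B-Instruct.Q5_K_M.llamafile -ngl 9999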