jart / emacs-copilot

Large language model code completion for Emacs
Apache License 2.0
698 stars 20 forks

Doing vfork: Exec format error #10

Open erwagasore opened 7 months ago

erwagasore commented 7 months ago

I am using macOS on an M2 and I have followed the installation process, but I am facing an error when attempting to run the command copilot-complete with the keybinding C-c C-k. The screenshots below highlight the issue.

I tried reading about it on Hacker News, but since I am very inexperienced with llamafiles I would appreciate a step-by-step guide to fix this issue. Thanks

zzhjerry commented 7 months ago

Having the same issue on macOS M1 using wizardcoder-python-34b-v1.0.Q5_K_M.llamafile

locutus3009 commented 6 months ago

Change the binary name to "sh" and pass the llamafile to it as a parameter.
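For anyone wondering why this helps: a llamafile is a polyglot file whose leading bytes are a valid shell script, so `sh` can bootstrap it even when the kernel refuses to exec the APE format directly (the "Exec format error"). A minimal sketch with a stand-in file, since I can't ship a real llamafile in a comment (the real command would be `sh ./your-model.llamafile`):

```shell
# Stand-in for a real llamafile: a file the kernel would not exec directly,
# but which sh can interpret because its leading bytes are shell script.
printf '%s\n' 'echo "bootstrapped by sh"' > demo.llamafile
chmod -x demo.llamafile          # not even marked executable
sh demo.llamafile                # sh reads and runs the shell stub
```

The same mechanism is what the defcustom change below relies on: Emacs launches `sh`, and `sh` launches the model.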

erwagasore commented 6 months ago

@locutus3009 I am not that knowledgeable about this. Can you please elaborate, maybe with an example? Thank you

feldrick commented 4 months ago

@erwagasore Here's how to make the change that @locutus3009 is proposing. In the buffer containing the code you run eval-buffer on, change defcustom copilot-bin to:

(defcustom copilot-bin
  "sh"
  "Path of llamafile executable with LLM weights."
  :type 'string
  :group 'copilot) 

and in the same buffer, change the call-process block to

      (call-process copilot-bin nil (list (current-buffer) nil) t
                    "wizardcoder-python-34b-v1.0.Q3_K_M.llamafile"
                    "--prompt-cache" cash
                    "--prompt-cache-all"
                    "--silent-prompt"
                    "--temp" "0"
                    "-c" "1024"
                    "-ngl" "35"
                    "-r" "```"
                    "-r" "\n}"
                    "-f" hist))

However, I still can't get this to work even when using sh and ape (as explained by @jart here) to execute the llamafile. When I call the copilot-complete function in a code buffer, emacs spins for a while (like it's loading up the llamafile), but then outputs nothing.
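One thing that made this hard to debug for me: in the call-process form above, the destination (list (current-buffer) nil) sends stdout to the buffer but discards stderr, so whatever the llamafile prints when it dies never shows up in Emacs. Running the same invocation in a terminal surfaces the error. A stand-in sketch (hypothetical file name, a real failure message would differ):

```shell
# Stand-in "llamafile" that fails with a diagnostic on stderr, as a real
# one might when weights are missing or the GPU layers don't fit in memory.
printf '%s\n' 'echo "model load failed" >&2; exit 1' > broken.llamafile

# What Emacs sees (stderr discarded): nothing but a nonzero exit.
sh broken.llamafile 2>/dev/null || echo "silent failure, exit $?"

# What a terminal shows you: the actual reason.
sh broken.llamafile 2>&1 || true
```

So "spins for a while, then outputs nothing" is consistent with the model crashing or being killed after load, with its error message thrown away.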

I'm on Apple Silicon (M3), using zsh 5.9 and Emacs 29.1.

spilornis commented 1 month ago

I had to add `-c` as the first argument, followed by the llamafile executable as the next one. However, the llamafile then ran as a server instead of returning the completions in the buffer.
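If that's what happened, the server behavior likely follows from `sh -c` semantics rather than from llamafile itself: arguments after the `-c` command string become `$0`, `$1`, … of that string, they are not appended to the command, so the llamafile starts with no flags and falls back to its default mode (the server). A stand-in sketch (hypothetical file name):

```shell
# Stand-in script that reports how many arguments it actually received.
printf '%s\n' 'echo "args: $#"' > argdemo.llamafile
chmod +x argdemo.llamafile

# Flags after the -c string become $0/$1; the script itself sees none.
sh -c ./argdemo.llamafile --temp 0             # prints: args: 0

# Forward them explicitly and the script receives both flags.
sh -c '"$0" "$@"' ./argdemo.llamafile --temp 0   # prints: args: 2
```

That is why passing the llamafile as a plain first argument to `sh` (as in @feldrick's snippet, no `-c`) keeps the flags intact, while `sh -c` silently drops them.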

govi218 commented 5 days ago

@feldrick I have a similar outcome, but instead it OOMs even with the 13b model; I'm on a Mac Pro M3 as well.

edit: nvm, 13b works for me with ape! No luck with anything larger.