pgibler / cmdh

Create Linux commands from natural language, in the shell.

`TypeError: generateFunction is not a function` for codellama model #16

Open · YoungPhlo opened this issue 3 months ago

YoungPhlo commented 3 months ago

macOS 14.1 23B2073

node -v v20.12.0

npm -v 10.5.0

tsc -v Version 5.4.3

I ran the installer command: `git clone https://github.com/pgibler/cmdh.git && cd cmdh && ./install.sh`

Then I used the configuration wizard to select ollama as the LLM host and codellama as the model.

Note: I noticed the installation script adds the alias to `.bashrc` even though I'm on macOS with zsh, and `.bashrc` didn't exist before I ran the script.

I moved the alias to `.zshrc` and ran `source ~/.zshrc`.

Running the following command: `cmdh 'Output available hard drive space'`

Gives this error:

file:///Users/dev/code/cmdh/dist/api/api.js:15
        const promise = generateFunction(prompt, system);
                        ^

TypeError: generateFunction is not a function
    at startChat (file:///Users/dev/code/cmdh/dist/api/api.js:15:25)
    at handlePrompt (file:///Users/dev/code/cmdh/dist/run.js:67:32)
    at async run (file:///Users/dev/code/cmdh/dist/run.js:40:13)
    at async main (file:///Users/dev/code/cmdh/dist/index.js:7:5)

If I run the command with the llama2 or mistral models configured, it works.

From what I can tell, it's only the latest codellama 7B that fails, with these specs:

model family llama · parameters 7B · quantization 4-bit 3.8GB
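
For context on the error itself: `x is not a function` is what Node throws when calling a value that is actually `undefined`. Here is a hypothetical sketch of how a configuration-keyed lookup could produce exactly this failure; the names are illustrative, not cmdh's actual source:

```typescript
// Hypothetical illustration of the failure mode; not cmdh's actual source.
// If a lookup table has no entry matching the active configuration,
// the lookup yields undefined, and calling it throws this exact TypeError.
type GenerateFn = (prompt: string, system: string) => Promise<string>;

const generators: Record<string, GenerateFn> = {
  llama2: async (prompt) => `llama2 response to: ${prompt}`,
  mistral: async (prompt) => `mistral response to: ${prompt}`,
  // no entry matching the codellama configuration
};

const generateFunction = generators['codellama']; // undefined on a miss
const promise = generateFunction('prompt', 'system');
// TypeError: generateFunction is not a function
```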

swoh816 commented 3 months ago

I don't get that error with codellama on my machine. I saw the same error when I hadn't configured cmdh properly; you might want to reconfigure it by going cmdh -> Manage configuration -> ollama -> codellama.

If you still see the same error message, try downgrading your Node version to something like v18. Mine is v18.19.1, and I had issues with cmdh when I used v22, though that's not your case.

I wonder if any of the above solutions works for you.

YoungPhlo commented 3 months ago
dev@Macbook-Pro: ~ % nvm install 18
Downloading and installing node v18.20.0...
Downloading https://nodejs.org/dist/v18.20.0/node-v18.20.0-darwin-arm64.tar.xz...
############################################################################################################################################### 100.0%
Computing checksum with shasum -a 256
Checksums matched!
Now using node v18.20.0 (npm v10.5.0)
dev@Macbook-Pro: ~ % nvm use 18
Now using node v18.20.0 (npm v10.5.0)
dev@Macbook-Pro: ~ % cmdh
? Select  Manage configuration
? Select  Modify
? Which LLM host do you want to use? ollama
Configure the ollama URL and model to use.
? Enter the model name: codellama
? Enter the ollama URL: http://localhost:11434
Configuration updated.
dev@Macbook-Pro: ~ % cmdh 'Output available hard drive space'
✖ Retrieving command... Output available hard drive space
node:internal/deps/undici/undici:11731
    Error.captureStackTrace(err, this);
          ^

TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11731:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  cause: Error: connect ECONNREFUSED ::1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16) {
    errno: -61,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }
}

Node.js v18.20.0

I ran `ollama serve` in another terminal.

Not sure where I'm going wrong here. Luckily, llama2 and mistral work well with cmdh, so this isn't a big deal.
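
For reference: `connect ECONNREFUSED ::1:11434` usually means Node resolved `localhost` to the IPv6 loopback `::1`, while `ollama serve` binds only to the IPv4 loopback `127.0.0.1` by default. Since Node 17, DNS lookup results are no longer reordered to prefer IPv4, so pointing the configured Ollama URL at `http://127.0.0.1:11434` instead of `http://localhost:11434` typically avoids this. A minimal probe to confirm which loopback answers, assuming Ollama's standard `/api/tags` endpoint and Node 18+ with global `fetch` (run as an ES module for top-level await):

```typescript
// Minimal probe: check which loopback address Ollama actually answers on.
// /api/tags is Ollama's standard model-listing endpoint.
for (const base of ['http://127.0.0.1:11434', 'http://[::1]:11434']) {
  try {
    const res = await fetch(`${base}/api/tags`);
    console.log(base, '->', res.status);
  } catch (err) {
    console.log(base, '->', (err as Error).message);
  }
}
```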

swoh816 commented 3 months ago

Very strange. It looks like either a connection issue or a parsing issue. A parsing issue can occur while cmdh parses Ollama's output: special characters like ` or ```, or escape characters, can get in the way and end up terminating Ollama's output where it shouldn't. This is a very common issue for open-source projects built around LLMs, and this issue in cmdh is also relevant. If that's the problem, the source code has to be fixed.
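
For illustration, this is the kind of fence-stripping such parsing typically needs before the response is interpreted. A minimal sketch, not cmdh's actual code; the function name is made up:

```typescript
// A sketch of the sanitization such parsing needs; not cmdh's actual code.
// Strips Markdown code fences and stray backticks from a model response
// before the response is parsed for the suggested command.
function stripFences(output: string): string {
  return output
    .replace(/```[a-zA-Z]*\r?\n?/g, '') // remove opening/closing ``` fences
    .replace(/`/g, '')                  // remove stray inline backticks
    .trim();
}

console.log(stripFences('```sh\ndf -h\n```')); // -> "df -h"
```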

I doubt it's an issue with the Ollama version: cmdh hasn't been updated for some time, so it should work with older versions of Ollama.

Anyway, it must be annoying that you can't use codellama :p Please share with us if you come up with a solution!

geraveoyomama commented 3 months ago

To add to this:

user@machine:~$ cmdh "what the dog doing"
✖ Retrieving command... what the dog doing
node:internal/deps/undici/undici:12430
    Error.captureStackTrace(err, this);
          ^

TypeError: fetch failed
    at node:internal/deps/undici/undici:12430:11
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  cause: Error: connect ECONNREFUSED ::1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1549:16) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }
}

Node.js v18.19.1

Before this, I had a similar issue to YoungPhlo's. I redid the cmdh configuration and got this error.

swoh816 commented 2 months ago

I found out that many of the recently created issues (including https://github.com/pgibler/cmdh/issues/14) occur when cmdh is configured with Ollama. The cmdh code for Ollama probably needs to be fixed.
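
For anyone digging into the Ollama path, here is a minimal sketch of a non-streaming request against Ollama's standard `POST /api/generate` endpoint, using the IPv4 loopback explicitly to sidestep the `::1` problem seen above. This is an assumption about what a fix might look like, not cmdh's actual implementation:

```typescript
// A minimal sketch of a non-streaming Ollama request; not cmdh's source.
// Uses the IPv4 loopback explicitly to avoid localhost resolving to ::1.
async function ollamaGenerate(prompt: string, model = 'codellama'): Promise<string> {
  const res = await fetch('http://127.0.0.1:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Ollama returned HTTP ${res.status}`);
  }
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

Setting `stream: false` makes Ollama return a single JSON object with a `response` field, which also sidesteps the chunk-parsing issues mentioned earlier in the thread.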