cocktailpeanut / dalai

The simplest way to run LLaMA on your local machine
https://cocktailpeanut.github.io/dalai

Error installing llama using docker compose (logs attached) #463

Open · umrashrf opened 11 months ago

umrashrf commented 11 months ago

Logs: https://pastebin.com/raw/V2PYd9Hw

node:events:492
      throw er; // Unhandled 'error' event
      ^

Error: EIO: i/o error, write
Emitted 'error' event on WriteStream instance at:
    at WriteStream.onerror (node:internal/streams/readable:785:14)
    at WriteStream.emit (node:events:514:28)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -5,
  code: 'EIO',
  syscall: 'write'
}

Node.js v18.17.0
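
For context on the crash mechanics: Node.js terminates the process whenever a stream emits an 'error' event with no listener attached, which is exactly what the trace above shows. Below is a minimal sketch (not dalai's actual code; the log path is hypothetical) of how attaching an 'error' handler to a WriteStream turns this kind of EIO failure into a reported error instead of a crash:

// Sketch only: illustrates the unhandled 'error' event from the trace above.
// The path below is hypothetical; the real write target is in the pastebin log.
const fs = require('fs');

const out = fs.createWriteStream('/tmp/dalai-install.log'); // hypothetical target

// Without this listener, an EIO during write is re-thrown as an uncaught
// exception ("Emitted 'error' event on WriteStream instance"), killing Node.
out.on('error', (err) => {
  console.error(`write failed: ${err.code} (${err.syscall})`, err.message);
});

out.write('installing llama...\n');

EIO (errno -5) generally means the underlying file, pipe, or terminal became unwritable mid-write, which can happen under docker compose when the container's output target goes away.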
mirek190 commented 11 months ago

Stop using this ancient, dead project and switch to llama.cpp or koboldcpp. Also, download GGML versions of the models from https://huggingface.co/TheBloke