keldenl / gpt-llama.cpp

A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.
MIT License

trouble generating a response #28

Closed gsgoldma closed 1 year ago

gsgoldma commented 1 year ago

I've set up authorization by pasting the local path to the model as the API key, and I'm running chatbot-ui with .env.local set to the local model. It connects, but as soon as I try to generate a response I get this error and the program exits:

Microsoft Windows [Version 10.0.19044.2846]
(c) Microsoft Corporation. All rights reserved.

D:\gpt-llama.cpp>npm start

> gpt-llama.cpp@0.2.2 start
> node index.js

Server is listening on:

See Docs

Test your installation

See https://github.com/keldenl/gpt-llama.cpp#usage for more guidance.

REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/models
PROCESS COMPLETE
REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/models
REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/chat/completions

===== CHAT COMPLETION REQUEST =====

===== LLAMA.CPP SPAWNED =====
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\llama.cpp\main -m 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 --temp 1 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt user --reverse-prompt system: --reverse-prompt system --reverse-prompt ## --reverse-prompt ## --reverse-prompt ### -i -p ### Instructions
Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.

### Inputs
system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.
user: prompt 1
user: hi

### Response
user: hjghj
assistant:

===== REQUEST =====
user: hjghj

node:events:491
      throw er; // Unhandled 'error' event
      ^

Error: spawn sk-\llama.cpp\main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:283:19)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:289:12)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'spawn sk-\llama.cpp\main',
  path: '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\llama.cpp\main',
  spawnargs: [ '-m', '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', '--temp', 1, '--n_predict', 1000, '--top_p', '0.1', '--top_k', '40', '-b', '512', '-c', '2048', '--repeat_penalty', '1.1764705882352942', '--reverse-prompt', 'user:', '--reverse-prompt', '\nuser', '--reverse-prompt', 'system:', '--reverse-prompt', '\nsystem', '--reverse-prompt', '##', '--reverse-prompt', '\n##', '--reverse-prompt', '###', '-i', '-p', '### Instructions\n' + 'Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.\n' + '\n' + '### Inputs\n' + 'system: You are a helpful assistant.\n' + 'user: How are you?\n' + 'assistant: Hi, how may I help you today?\n' + "system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.\n" + 'user: prompt 1\n' + 'user: hi\n' + '\n' + '### Response\n' + 'user: hjghj\n' + 'assistant:' ] }

Node.js v18.13.0

D:\gpt-llama.cpp>

gsgoldma commented 1 year ago

It seems to be trying to spawn using some API key sk-..., making my path sk-...\llama.cpp\main instead of my D: drive. Not sure how this is happening, and I'm not sure where to edit this to fix it.
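That matches the stack trace: the Authorization header is being consumed as a filesystem path. Here is a rough sketch of the behavior the error suggests (the names below are illustrative, not gpt-llama.cpp's actual code), assuming the Bearer token is expected to be the local model path and the llama.cpp binary location is derived from it:

```js
// Illustrative only -- not gpt-llama.cpp's actual source. The Bearer token is
// treated as the model path; a real sk- key contains no "llama.cpp" segment,
// so the derived binary path becomes "sk-...\llama.cpp\main" and spawn fails
// with ENOENT.
const path = require('path');
const { spawn } = require('child_process');

function spawnLlamaFromAuth(authHeader, extraArgs = []) {
  const modelPath = (authHeader || '').replace('Bearer ', ''); // expected: path to a local model file
  const llamaDir = modelPath.split('llama.cpp')[0];            // for an sk- key this is the whole key
  const binary = path.join(llamaDir, 'llama.cpp', 'main');     // -> 'sk-...\llama.cpp\main'
  return spawn(binary, ['-m', modelPath, ...extraArgs]);       // ENOENT: no such file
}
```

So whatever client is calling the server is sending a real OpenAI key as the token instead of the model path.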

gsgoldma commented 1 year ago

It was because I had set my OpenAI key in the environment variables in advanced settings way earlier! It was pulling from that every time.
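For anyone hitting the same thing: chatbot-ui is a Next.js app, and dotenv-style loaders generally do not overwrite a variable that is already set in the environment, so a system-level OPENAI_API_KEY wins over the value in .env.local. A minimal illustration of that precedence (the file name and model path are placeholders):

```js
// Assumes .env.local contains OPENAI_API_KEY=D:\llama.cpp\models\your-model.bin
// while the Windows user environment already defines OPENAI_API_KEY=sk-...
require('dotenv').config({ path: '.env.local' });

// dotenv never overwrites a variable that already exists in process.env, so
// the system-level sk- key is what ends up sent as the Bearer token.
console.log(process.env.OPENAI_API_KEY); // prints the sk- key, not the model path
```

Deleting the OPENAI_API_KEY user variable (and opening a fresh terminal) lets the .env.local value through again.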