keldenl / gpt-llama.cpp

A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.
MIT License

weird headers error in chatcompletion mode #38

Open OracleToes opened 1 year ago

OracleToes commented 1 year ago

Request DONE

Request DONE
node:internal/errors:490
    ErrorCaptureStackTrace(err);
    ^

Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at new NodeError (node:internal/errors:399:5)
    at ServerResponse.setHeader (node:_http_outgoing:649:11)
    at ServerResponse.header (/home/momiro/jane-gpt-llama/node_modules/express/lib/response.js:794:10)
    at ServerResponse.send (/home/momiro/jane-gpt-llama/node_modules/express/lib/response.js:174:12)
    at ServerResponse.json (/home/momiro/jane-gpt-llama/node_modules/express/lib/response.js:278:15)
    at Object.write (file:///home/momiro/jane-gpt-llama/routes/chatRoutes.js:308:8)
    at ensureIsPromise (node:internal/webstreams/util:182:19)
    at writableStreamDefaultControllerProcessWrite (node:internal/webstreams/writablestream:1115:5)
    at writableStreamDefaultControllerAdvanceQueueIfNeeded (node:internal/webstreams/writablestream:1230:5)
    at writableStreamDefaultControllerWrite (node:internal/webstreams/writablestream:1104:3) {
  code: 'ERR_HTTP_HEADERS_SENT'
}

I am using the /chat feature to maintain a conversation. Usually somewhere between 5 and 10 messages in, an error like this can happen. It doesn't always happen, though; sometimes llama.cpp just locks up instead.
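For reference, ERR_HTTP_HEADERS_SENT means a second response write was attempted after the headers had already gone out, e.g. an error path calling res.json() while the completion is being streamed. A minimal sketch of how that can happen (hypothetical handler, not the actual chatRoutes.js code):

```js
// Hypothetical Express handler (not the actual chatRoutes.js code) showing how
// ERR_HTTP_HEADERS_SENT can happen: a JSON error response is attempted after
// streaming has already started, so the headers have already been sent.
const express = require('express');
const app = express();

app.post('/v1/chat/completions', (req, res) => {
  // Start streaming chunks to the client; this sends the headers immediately.
  res.writeHead(200, { 'Content-Type': 'text/event-stream' });
  res.write('data: {"choices":[{"delta":{"content":"Hello"}}]}\n\n');

  // Later, an error path tries to send a JSON body on the same response...
  setTimeout(() => {
    // ...but the headers were already sent above, so Node throws
    // ERR_HTTP_HEADERS_SENT, which surfaces as the crash in the trace.
    res.json({ error: 'model process exited unexpectedly' });
  }, 10);
});

app.listen(8000);
```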

keldenl commented 1 year ago

I think the issue was a missing res.end() and a race condition. I just merged https://github.com/keldenl/gpt-llama.cpp/pull/39/files, which should solve it. @OracleToes, can you try pulling and see if it's still happening?
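For anyone reading along, the general shape of that kind of fix is to close the response exactly once and guard any final write. A rough sketch of the idea (simplified, not the actual diff in #39):

```js
// Simplified sketch of the idea, not the actual #39 diff: end the response
// exactly once, and only send a JSON body if no streaming chunk has gone out.
function finishResponse(res, payload) {
  if (res.writableEnded) {
    return; // another code path already closed the response
  }
  if (!res.headersSent) {
    res.json(payload); // nothing streamed yet, a normal JSON reply is safe
  } else {
    res.end(); // already streaming, so just terminate the stream cleanly
  }
}
```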