openchatai / OpenChat

LLMs custom-chatbots console ⚡
https://open.cx
MIT License

"error sending the message." (404 in LLM-server) #139

Closed Jchang4 closed 6 months ago

Jchang4 commented 1 year ago

Hey, I installed the repo and ran make install. The website loads fine and I can load a website as a knowledge source. But when I test out the bot and ask it a question, I get "error sending the message."

Do you know what's up? I just followed the installation guide, using Pinecone. =\

codebanesr commented 1 year ago

Could you please review the logs for the "llm-server" service? You can do this with the command docker-compose logs -f llm-server. There may be a missing environment variable. Additionally, ensure that you are using a paid version of Pinecone, as the chatbot requires the ability to create a namespace, which is not available in the Pinecone free trial.

Kindly share the logs with us; this will enable us to provide you with a more conclusive response.
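
For reference, both checks can be run from the project root. This is a sketch: it assumes the compose service is named llm-server, as in the command above, and the grep pattern should be adjusted to whatever variables your setup uses.

docker-compose logs -f llm-server
# verify the expected variables actually reached the container
docker-compose exec llm-server printenv | grep -E 'OPENAI|STORE|PINECONE|QDRANT'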

spencerwongfeilong commented 1 year ago

I am running an application (OpenChat) on an Ubuntu 20.04 Virtual Machine (VM) that resides on a QNAP NAS with the IP address 192.168.0.138. I've installed this application using Docker and it is accessible via port 8000.

When I access the application directly from within the Ubuntu VM - either via localhost or through the VM's local IP address 192.168.0.138 - everything works as expected and I can successfully send queries to the Internet.

However, when I attempt to access the application from another device on my network, with the IP 192.168.0.100, I run into problems. While I can reach the application's interface, any attempts to send queries to the internet through the application fail.

I have checked the firewall settings on the Ubuntu VM and found no rules that should prevent or restrict this traffic. Reviewing the iptables rules reveals that Docker appears to be correctly routing traffic to port 8000.

Given these findings, it seems there could be some issues with network configurations, container-to-host port mappings, or possibly within the application itself that are preventing successful query operations when accessed from devices outside of the host VM.

Could you enlighten me on how I should move forward? Thank you!

Jchang4 commented 1 year ago

@codebanesr

error Error: Request failed with status code 404
    at createError (/usr/src/app/node_modules/openai/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/usr/src/app/node_modules/openai/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/usr/src/app/node_modules/openai/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (node:events:526:35)
    at endReadableNT (node:internal/streams/readable:1359:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  config: {
    transitional: {
      silentJSONParsing: true,
      forcedJSONParsing: true,
      clarifyTimeoutError: false
    },
    adapter: [Function: httpAdapter],
    transformRequest: [ [Function: transformRequest] ],
    transformResponse: [ [Function: transformResponse] ],
    timeout: 0,
    xsrfCookieName: 'XSRF-TOKEN',
    xsrfHeaderName: 'X-XSRF-TOKEN',
    maxContentLength: -1,
    maxBodyLength: -1,
    validateStatus: [Function: validateStatus],
    headers: {
      Accept: 'application/json, text/plain, */*',
      'Content-Type': 'application/json',
      'User-Agent': 'OpenAI/NodeJS/3.3.0',
      Authorization: 'Bearer <token>',
      'Content-Length': 1589
    },
    method: 'post',
    data: `{"model":"gpt-35-turbo","temperature":1,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"user","content":"You are a helpful AI customer support agent. Use the following pieces of context to answer the question at the end.\\nIf you don\\\\'t know the answer, just say you don\\\\'t know. DO NOT try to make up an answer.\\nIf the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.\\n\\nto GitHub trending list world wide more than 7 times! </p> OpenChat is open source and trusted by thousands of people around the world, we made to GitHub trending list world wide more than 7 times! </p> Eden Marco</p> @EdenEmarco177</p> Just stumbled upon OpenChat update🤖 created by @ikbenbasha . OpenChat is an epic open source project that lets you: 1️⃣ Chat with your ANYTHING. 2️⃣ Create customer support chat widget for your web app! (with different personas) Super easy to set up-Implemented with @LangChainAI🦜🔗 and @pinecone🌲 vectorstore. If you want to learn @LangChainAI , Prompt Engineering, Retrieval, this repository is a great example of real world usage. @ikbenbasha nicely done!</p> DataChazGPT</p> @datachaz</p> #ChatGPT for your codebase! 🤯 You now can upload your entire codebase/Git repos to #OpenChat &amp; ask #GPT4 to implement anything! With the full context of your code, #OpenChat can answer your questions accurately and effortlessly! Powered by @LangChainAI! 🔥 Link\\n\\nQuestion: what is openchat?\\nHelpful answer in markdown:"}]}`,
    url: 'https://api.openai.com/v1/chat/completions'
  },
  request: <ref *1> ClientRequest {
    _events: [Object: null prototype] {
      abort: [Function (anonymous)],
      aborted: [Function (anonymous)],
      connect: [Function (anonymous)],
      error: [Function (anonymous)],
      socket: [Function (anonymous)],
      timeout: [Function (anonymous)],
      finish: [Function: requestOnFinish]
    },
    _eventsCount: 7,
    _maxListeners: undefined,
    outputData: [],
    outputSize: 0,
    writable: true,
    destroyed: false,
    _last: true,
    chunkedEncoding: false,
    shouldKeepAlive: false,
    maxRequestsOnConnectionReached: false,
    _defaultKeepAlive: true,
    useChunkedEncodingByDefault: true,
    sendDate: false,
    _removedConnection: false,
    _removedContLen: false,
    _removedTE: false,
    strictContentLength: false,
    _contentLength: 1589,
    _hasBody: true,
    _trailer: '',
    finished: true,
    _headerSent: true,
    _closed: false,
    socket: TLSSocket {
      _tlsOptions: [Object],
      _secureEstablished: true,
      _securePending: false,
      _newSessionPending: false,
      _controlReleased: true,
      secureConnecting: false,
      _SNICallback: null,
      servername: 'api.openai.com',
      alpnProtocol: false,
      authorized: true,
      authorizationError: null,
      encrypted: true,
      _events: [Object: null prototype],
      _eventsCount: 10,
      connecting: false,
      _hadError: false,
      _parent: null,
      _host: 'api.openai.com',
      _closeAfterHandlingError: false,
      _readableState: [ReadableState],
      _maxListeners: undefined,
      _writableState: [WritableState],
      allowHalfOpen: false,
      _sockname: null,
      _pendingData: null,
      _pendingEncoding: '',
      server: undefined,
      _server: null,
      ssl: [TLSWrap],
      _requestCert: true,
      _rejectUnauthorized: true,
      parser: null,
      _httpMessage: [Circular *1],
      [Symbol(res)]: [TLSWrap],
      [Symbol(verified)]: true,
      [Symbol(pendingSession)]: null,
      [Symbol(async_id_symbol)]: 18709,
      [Symbol(kHandle)]: [TLSWrap],
      [Symbol(lastWriteQueueSize)]: 0,
      [Symbol(timeout)]: null,
      [Symbol(kBuffer)]: null,
      [Symbol(kBufferCb)]: null,
      [Symbol(kBufferGen)]: null,
      [Symbol(kCapture)]: false,
      [Symbol(kSetNoDelay)]: false,
      [Symbol(kSetKeepAlive)]: true,
      [Symbol(kSetKeepAliveInitialDelay)]: 60,
      [Symbol(kBytesRead)]: 0,
      [Symbol(kBytesWritten)]: 0,
      [Symbol(connect-options)]: [Object]
    },
    _header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
      'Accept: application/json, text/plain, */*\r\n' +
      'Content-Type: application/json\r\n' +
      'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
      'Authorization: Bearer <token>\r\n' +
      'Content-Length: 1589\r\n' +
      'Host: api.openai.com\r\n' +
      'Connection: close\r\n' +
      '\r\n',
    _keepAliveTimeout: 0,
    _onPendingData: [Function: nop],
    agent: Agent {
      _events: [Object: null prototype],
      _eventsCount: 2,
      _maxListeners: undefined,
      defaultPort: 443,
      protocol: 'https:',
      options: [Object: null prototype],
      requests: [Object: null prototype] {},
      sockets: [Object: null prototype],
      freeSockets: [Object: null prototype] {},
      keepAliveMsecs: 1000,
      keepAlive: false,
      maxSockets: Infinity,
      maxFreeSockets: 256,
      scheduling: 'lifo',
      maxTotalSockets: Infinity,
      totalSocketCount: 1,
      maxCachedSessions: 100,
      _sessionCache: [Object],
      [Symbol(kCapture)]: false
    },
    socketPath: undefined,
    method: 'POST',
    maxHeaderSize: undefined,
    insecureHTTPParser: undefined,
    joinDuplicateHeaders: undefined,
    path: '/v1/chat/completions',
    _ended: true,
    res: IncomingMessage {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 4,
      _maxListeners: undefined,
      socket: [TLSSocket],
      httpVersionMajor: 1,
      httpVersionMinor: 1,
      httpVersion: '1.1',
      complete: true,
      rawHeaders: [Array],
      rawTrailers: [],
      joinDuplicateHeaders: undefined,
      aborted: false,
      upgrade: false,
      url: '',
      method: null,
      statusCode: 404,
      statusMessage: 'Not Found',
      client: [TLSSocket],
      _consuming: false,
      _dumped: false,
      req: [Circular *1],
      responseUrl: 'https://api.openai.com/v1/chat/completions',
      redirects: [],
      [Symbol(kCapture)]: false,
      [Symbol(kHeaders)]: [Object],
      [Symbol(kHeadersCount)]: 22,
      [Symbol(kTrailers)]: null,
      [Symbol(kTrailersCount)]: 0
    },
    aborted: false,
    timeoutCb: null,
    upgradeOrConnect: false,
    parser: null,
    maxHeadersCount: null,
    reusedSocket: false,
    host: 'api.openai.com',
    protocol: 'https:',
    _redirectable: Writable {
      _writableState: [WritableState],
      _events: [Object: null prototype],
      _eventsCount: 3,
      _maxListeners: undefined,
      _options: [Object],
      _ended: true,
      _ending: true,
      _redirectCount: 0,
      _redirects: [],
      _requestBodyLength: 1589,
      _requestBodyBuffers: [],
      _onNativeResponse: [Function (anonymous)],
      _currentRequest: [Circular *1],
      _currentUrl: 'https://api.openai.com/v1/chat/completions',
      [Symbol(kCapture)]: false
    },
    [Symbol(kCapture)]: false,
    [Symbol(kBytesWritten)]: 0,
    [Symbol(kNeedDrain)]: false,
    [Symbol(corked)]: 0,
    [Symbol(kOutHeaders)]: [Object: null prototype] {
      accept: [Array],
      'content-type': [Array],
      'user-agent': [Array],
      authorization: [Array],
      'content-length': [Array],
      host: [Array]
    },
    [Symbol(errored)]: null,
    [Symbol(kHighWaterMark)]: 16384,
    [Symbol(kRejectNonStandardBodyWrites)]: false,
    [Symbol(kUniqueHeaders)]: null
  },
  response: {
    status: 404,
    statusText: 'Not Found',
    headers: {
      date: 'Sat, 19 Aug 2023 15:52:48 GMT',
      'content-type': 'application/json; charset=utf-8',
      'content-length': '185',
      connection: 'close',
      vary: 'Origin',
      'x-request-id': '<some_id>',
      'strict-transport-security': 'max-age=15724800; includeSubDomains',
      'cf-cache-status': 'DYNAMIC',
      server: 'cloudflare',
      'cf-ray': '<some_id>',
      'alt-svc': 'h3=":443"; ma=86400'
    },
    config: {
      transitional: [Object],
      adapter: [Function: httpAdapter],
      transformRequest: [Array],
      transformResponse: [Array],
      timeout: 0,
      xsrfCookieName: 'XSRF-TOKEN',
      xsrfHeaderName: 'X-XSRF-TOKEN',
      maxContentLength: -1,
      maxBodyLength: -1,
      validateStatus: [Function: validateStatus],
      headers: [Object],
      method: 'post',
      data: `{"model":"gpt-35-turbo","temperature":1,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"user","content":"You are a helpful AI customer support agent. Use the following pieces of context to answer the question at the end.\\nIf you don\\\\'t know the answer, just say you don\\\\'t know. DO NOT try to make up an answer.\\nIf the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.\\n\\nto GitHub trending list world wide more than 7 times! </p> OpenChat is open source and trusted by thousands of people around the world, we made to GitHub trending list world wide more than 7 times! </p> Eden Marco</p> @EdenEmarco177</p> Just stumbled upon OpenChat update🤖 created by @ikbenbasha . OpenChat is an epic open source project that lets you: 1️⃣ Chat with your ANYTHING. 2️⃣ Create customer support chat widget for your web app! (with different personas) Super easy to set up-Implemented with @LangChainAI🦜🔗 and @pinecone🌲 vectorstore. If you want to learn @LangChainAI , Prompt Engineering, Retrieval, this repository is a great example of real world usage. @ikbenbasha nicely done!</p> DataChazGPT</p> @datachaz</p> #ChatGPT for your codebase! 🤯 You now can upload your entire codebase/Git repos to #OpenChat &amp; ask #GPT4 to implement anything! With the full context of your code, #OpenChat can answer your questions accurately and effortlessly! Powered by @LangChainAI! 🔥 Link\\n\\nQuestion: what is openchat?\\nHelpful answer in markdown:"}]}`,
      url: 'https://api.openai.com/v1/chat/completions'
    },
    request: <ref *1> ClientRequest {
      _events: [Object: null prototype],
      _eventsCount: 7,
      _maxListeners: undefined,
      outputData: [],
      outputSize: 0,
      writable: true,
      destroyed: false,
      _last: true,
      chunkedEncoding: false,
      shouldKeepAlive: false,
      maxRequestsOnConnectionReached: false,
      _defaultKeepAlive: true,
      useChunkedEncodingByDefault: true,
      sendDate: false,
      _removedConnection: false,
      _removedContLen: false,
      _removedTE: false,
      strictContentLength: false,
      _contentLength: 1589,
      _hasBody: true,
      _trailer: '',
      finished: true,
      _headerSent: true,
      _closed: false,
      socket: [TLSSocket],
      _header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
        'Accept: application/json, text/plain, */*\r\n' +
        'Content-Type: application/json\r\n' +
        'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
        'Authorization: Bearer <token>\r\n' +
        'Content-Length: 1589\r\n' +
        'Host: api.openai.com\r\n' +
        'Connection: close\r\n' +
        '\r\n',
      _keepAliveTimeout: 0,
      _onPendingData: [Function: nop],
      agent: [Agent],
      socketPath: undefined,
      method: 'POST',
      maxHeaderSize: undefined,
      insecureHTTPParser: undefined,
      joinDuplicateHeaders: undefined,
      path: '/v1/chat/completions',
      _ended: true,
      res: [IncomingMessage],
      aborted: false,
      timeoutCb: null,
      upgradeOrConnect: false,
      parser: null,
      maxHeadersCount: null,
      reusedSocket: false,
      host: 'api.openai.com',
      protocol: 'https:',
      _redirectable: [Writable],
      [Symbol(kCapture)]: false,
      [Symbol(kBytesWritten)]: 0,
      [Symbol(kNeedDrain)]: false,
      [Symbol(corked)]: 0,
      [Symbol(kOutHeaders)]: [Object: null prototype],
      [Symbol(errored)]: null,
      [Symbol(kHighWaterMark)]: 16384,
      [Symbol(kRejectNonStandardBodyWrites)]: false,
      [Symbol(kUniqueHeaders)]: null
    },
    data: { error: [Object] }
  },
  isAxiosError: true,
  toJSON: [Function: toJSON],
  attemptNumber: 1,
  retriesLeft: 6
}
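
Condensed, the failing call in the dump above amounts to the request below (reconstructed from the logged config, not code from the repo). Note that the model id being sent is gpt-35-turbo, the Azure-style name; the public api.openai.com endpoint expects gpt-3.5-turbo, and an unrecognized model id is one way to get exactly this kind of 404.

curl -i https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-35-turbo","messages":[{"role":"user","content":"what is openchat?"}]}'
# returns HTTP/1.1 404 Not Found, matching the response object in the dump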

codebanesr commented 1 year ago

@Jchang4 OK, I'll need to update the documentation. Please use the common.env file with the following env keys; you can ignore the example.env files.

common.env

OPENAI_API_KEY=

STORE=qdrant

#PINECONE_API_KEY=
#PINECONE_ENV=
#VECTOR_STORE_INDEX_NAME=

# QDRANT_URL
QDRANT_URL=http://qdrant:6333

codebanesr commented 1 year ago

https://github.com/openchatai/OpenChat/issues/139#issuecomment-1684714540 @spencerwongfeilong The chat.js file should be updated with the IP address of the machine where the server resides. Check the network tab to see where your network calls are being forwarded.

qlimpact commented 1 year ago

I got the same error, "error sending the message.", even with a common.env file that includes those env keys.

@Jchang4 ok I'll need to update the documentation, please use common.env file with the following env keys. You can ignore example.env files

common.env

OPENAI_API_KEY=

STORE=pinecone

PINECONE_API_KEY=
PINECONE_ENV=
VECTOR_STORE_INDEX_NAME=

# QDRANT_URL
QDRANT_URL=http://qdrant:6333

codebanesr commented 1 year ago

@qlimpact you need to delete the old llm-server image and then rebuild it (see the example commands below). Also make sure STORE=qdrant if you are using qdrant. To clean up, use the Docker dashboard or docker container rm -f $(docker image ls -aq). If you still face a problem, please post the llm-server log here.

OPENAI_API_KEY=

STORE=qdrant

# QDRANT_URL
QDRANT_URL=http://qdrant:6333
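
One way to force a clean rebuild of just that service (a sketch; it assumes the compose service is named llm-server):

docker-compose rm -sf llm-server
docker-compose build --no-cache llm-server
docker-compose up -d llm-server
# then watch the logs again
docker-compose logs -f llm-server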

qlimpact commented 1 year ago

Still the same issue with these env settings.

Here is the log:

docker-compose logs -f llm-server

openchat-llm-server-1 |
openchat-llm-server-1 |
openchat-llm-server-1 | > openchat-llm-server@0.0.1 dev
openchat-llm-server-1 | > next dev
openchat-llm-server-1 |
openchat-llm-server-1 |
openchat-llm-server-1 | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
openchat-llm-server-1 | info - Loaded env from /usr/src/app/.env
openchat-llm-server-1 | Attention: Next.js now collects completely anonymous telemetry regarding usage.
openchat-llm-server-1 | This information is used to shape Next.js' roadmap and prioritize features.
openchat-llm-server-1 | You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
openchat-llm-server-1 | https://nextjs.org/telemetry
openchat-llm-server-1 |
openchat-llm-server-1 |
openchat-llm-server-1 | event - compiled client and server successfully in 1687 ms (154 modules)
openchat-llm-server-1 | wait - compiling /api/ingest (client and server)...
openchat-llm-server-1 | event - compiled successfully in 180 ms (58 modules)
openchat-llm-server-1 | [WARN] Importing from 'langchain/document_loaders' is deprecated. Import from eg. 'langchain/document_loaders/fs/text' or 'langchain/document_loaders/web/cheerio' instead. See https://js.langchain.com/docs/getting-started/install#updating-from-0052 for upgrade instructions.
openchat-llm-server-1 | All is done, folder deleted

@qlimpact you need to delete the old llm-server image and then rebuild it. Also make sure, STORE=qdrant, if you are using qdrant use the docker dashboard or docker container rm -f $(docker image ls -aq). If you still face problem please post the llm-server log here

OPENAI_API_KEY=

STORE=qdrant

# QDRANT_URL
QDRANT_URL=http://qdrant:6333

codebanesr commented 1 year ago

@qlimpact @Jchang4 We've just rolled out ready-to-use backend server images. Kindly pull from the main branch, make sure to adjust "common.env" if necessary, and initiate the installation with make install. Should you encounter any challenges, don't hesitate to reach out – I'm here to help!

spencerwongfeilong commented 1 year ago

#139 (comment) @spencerwongfeilong chat.js file should be updated with the ip address of the machine where the server resides. Check the network tab to make sure where your network calls are being forwarded

Hi, I checked the network calls. It seems to me that they are not being forwarded. How do I update the chat.js file? Do I replace all occurrences of localhost with the IP of the server?

codebanesr commented 1 year ago

@spencerwongfeilong In backend-server/public/chat.js, replace http://localhost:8000 with the IP address where your remote server is hosted.
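
For example (using the VM address mentioned earlier in this thread; substitute the address of your own host):

# swap the hard-coded base URL in the widget script in place
sed -i 's|http://localhost:8000|http://192.168.0.138:8000|g' backend-server/public/chat.js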

qlimpact commented 1 year ago

Thank you!

I did a fresh pull and make install on a brand-new server, but I still get the same error. I also changed http://localhost:8000 in backend-server/public/chat.js to the IP address where the server is hosted.

From the browser console, sending to /api/chat gives the following error:

{ "message": "cURL error 6: Could not resolve host: llm-server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://llm-server:3000/api/chat", "exception": "Illuminate\Http\Client\ConnectionException", "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Http/Client/PendingRequest.php", "line": 856, "trace": [ { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Support/helpers.php", "line": 248, "function": "Illuminate\Http\Client\{closure}", "class": "Illuminate\Http\Client\PendingRequest", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Http/Client/PendingRequest.php", "line": 864, "function": "retry" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Http/Client/PendingRequest.php", "line": 729, "function": "send", "class": "Illuminate\Http\Client\PendingRequest", "type": "->" }, { "file": "/var/www/html/app/Http/Api/Controllers/MessageController.php", "line": 114, "function": "post", "class": "Illuminate\Http\Client\PendingRequest", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Controller.php", "line": 54, "function": "sendChat", "class": "App\Http\Api\Controllers\MessageController", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/ControllerDispatcher.php", "line": 43, "function": "callAction", "class": "Illuminate\Routing\Controller", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Route.php", "line": 260, "function": "dispatch", "class": "Illuminate\Routing\ControllerDispatcher", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Route.php", "line": 205, "function": "runController", "class": "Illuminate\Routing\Route", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Router.php", "line": 798, "function": "run", "class": "Illuminate\Routing\Route", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 141, "function": "Illuminate\Routing\{closure}", "class": "Illuminate\Routing\Router", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Middleware/SubstituteBindings.php", "line": 50, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Routing\Middleware\SubstituteBindings", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Middleware/ThrottleRequests.php", "line": 152, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Middleware/ThrottleRequests.php", "line": 128, "function": "handleRequest", "class": "Illuminate\Routing\Middleware\ThrottleRequests", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Middleware/ThrottleRequests.php", "line": 80, "function": "handleRequestUsingNamedLimiter", "class": "Illuminate\Routing\Middleware\ThrottleRequests", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Routing\Middleware\ThrottleRequests", "type": "->" }, { "file": 
"/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 116, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Router.php", "line": 799, "function": "then", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Router.php", "line": 776, "function": "runRouteWithinStack", "class": "Illuminate\Routing\Router", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Router.php", "line": 740, "function": "runRoute", "class": "Illuminate\Routing\Router", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Routing/Router.php", "line": 729, "function": "dispatchToRoute", "class": "Illuminate\Routing\Router", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php", "line": 200, "function": "dispatch", "class": "Illuminate\Routing\Router", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 141, "function": "Illuminate\Foundation\Http\{closure}", "class": "Illuminate\Foundation\Http\Kernel", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php", "line": 21, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ConvertEmptyStringsToNull.php", "line": 31, "function": "handle", "class": "Illuminate\Foundation\Http\Middleware\TransformsRequest", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Foundation\Http\Middleware\ConvertEmptyStringsToNull", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php", "line": 21, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TrimStrings.php", "line": 40, "function": "handle", "class": "Illuminate\Foundation\Http\Middleware\TransformsRequest", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Foundation\Http\Middleware\TrimStrings", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ValidatePostSize.php", "line": 27, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Foundation\Http\Middleware\ValidatePostSize", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/PreventRequestsDuringMaintenance.php", "line": 86, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": 
"Illuminate\Foundation\Http\Middleware\PreventRequestsDuringMaintenance", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Http/Middleware/HandleCors.php", "line": 62, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Http\Middleware\HandleCors", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Http/Middleware/TrustProxies.php", "line": 39, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 180, "function": "handle", "class": "Illuminate\Http\Middleware\TrustProxies", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 116, "function": "Illuminate\Pipeline\{closure}", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php", "line": 175, "function": "then", "class": "Illuminate\Pipeline\Pipeline", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php", "line": 144, "function": "sendRequestThroughRouter", "class": "Illuminate\Foundation\Http\Kernel", "type": "->" }, { "file": "/var/www/html/public/index.php", "line": 52, "function": "handle", "class": "Illuminate\Foundation\Http\Kernel", "type": "->" }, { "file": "/var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/resources/server.php", "line": 16, "function": "require_once" } ] }

@qlimpact @Jchang4 We've just rolled out ready-to-use backend server images. Kindly pull from the main branch, make sure to adjust "common.env" if necessary, and initiate the installation with make install. Should you encounter any challenges, don't hesitate to reach out – I'm here to help!

qlimpact commented 1 year ago

@codebanesr and the following is the log:

docker-compose logs -f llm-server

openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error
openchat-llm-server-1 | exec /usr/local/bin/docker-entrypoint.sh: exec format error

spencerwongfeilong commented 1 year ago

@spencerwongfeilong In backend-server/public/chat.js, change http://localhost:8000 with the ip address where your remote server is hosted

Hi. It works. I can now send queries from a different device within the network.

I also tried to use a Cloudflare Tunnel to put OpenChat on the Internet, but encountered a similar problem: I am not able to send queries. May I know what else I should configure to overcome the problem?

codebanesr commented 1 year ago

@spencerwongfeilong , it appears that your LLM server is currently inaccessible from the backend server. Would you be open to joining our Discord channel? I'd appreciate the opportunity to understand the issue you're encountering through a screen sharing call, if that's feasible for you.

spencerwongfeilong commented 1 year ago

@spencerwongfeilong , it appears that your LLM server is currently inaccessible from the backend server. Would you be open to joining our Discord channel? I'd appreciate the opportunity to understand the issue you're encountering through a screen sharing call, if that's feasible for you.

Sure, I'd like to join your Discord channel. May I know the link to your Discord channel, please?

codebanesr commented 1 year ago

https://discord.gg/9hxrnC2k
