gptscript-ai / gptscript

Build AI assistants that interact with your systems
https://gptscript.ai
Apache License 2.0

UI - No indication to the user when error is encountered during LLM calls. #569

Closed: sangee2004 closed this issue 1 month ago

sangee2004 commented 2 months ago

Server - gptscript version v0.0.0-dev-53f7fbde-dirty

Steps to reproduce the problem:

  1. Tried to launch the UI for digital-ocean-agent, which resulted in a GitHub rate-limit error when fetching the credential helper commit.

    2024/06/26 15:07:32 failed to get GitHub commit of gptscript-ai/gptscript-credential-helpers at HEAD (fallback error failed to find remote "https://github.com/gptscript-ai/gptscript-credential-helpers.git" as "HEAD"): 403 Forbidden {"message":"API rate limit exceeded for 107.194.201.89. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}
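The 403 body above is plain GitHub API JSON, so the human-readable message is recoverable before display. A minimal sketch of how a UI layer could extract it (the helper name and shape here are hypothetical, not part of gptscript):

```typescript
// Hypothetical helper: turn a GitHub API error response into a
// readable string, falling back to the raw body if it is not JSON.
interface GitHubError {
  message: string;
  documentation_url?: string;
}

function describeGitHubError(status: number, body: string): string {
  try {
    const err = JSON.parse(body) as GitHubError;
    return `GitHub API error ${status}: ${err.message}`;
  } catch {
    // Body was not JSON (e.g. a bare "Internal Server Error").
    return `GitHub API error ${status}: ${body}`;
  }
}
```

Something along these lines would let the chat surface "API rate limit exceeded …" instead of a generic 500.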
  2. The UI was launched but remained stuck in the "Waiting for model response..." state, with no indication of the errors encountered.

  3. The following errors were seen in the console:

    ⨯ unhandledRejection: error, status code: 500, message: Internal Server Error
    ⨯ unhandledRejection: error, status code: 500, message: Internal Server Error
gptscript --default-model 'claude-3-5-sonnet-20240620 from github.com/gptscript-ai/claude3-anthropic-provider' --ui github.com/gptscript-ai/digital-ocean-agent
15:03:38 WARNING: Changing the default model can have unknown behavior for existing tools. Use the model field per tool instead.
15:03:39 started  [main] [input=--file=github.com/gptscript-ai/digital-ocean-agent]
15:03:39 started  [context: github.com/gptscript-ai/context/os]
15:03:39 sent     [context: github.com/gptscript-ai/context/os]
15:03:39 ended    [context: github.com/gptscript-ai/context/os] [output=The local operating systems is Darwin, release 23.3.0]
15:03:39 started  [provider: https://raw.githubusercontent.com/gptscript-ai/claude3-anthropic-provider/5b581e7b84bdd99f5d56df593d99397c9f91a8e3/tool.gpt:Anthropic Claude3 Model Provider]
15:03:39 launched [Anthropic Claude3 Model Provider][https://raw.githubusercontent.com/gptscript-ai/claude3-anthropic-provider/5b581e7b84bdd99f5d56df593d99397c9f91a8e3/tool.gpt:Anthropic Claude3 Model Provider] port [10841] [/usr/local/bin/gptscript sys.daemon /usr/bin/env python3 /Users/sangeethahariharan/Library/Caches/gptscript/repos/5b581e7b84bdd99f5d56df593d99397c9f91a8e3/tool.gpt/python3.12/main.py]
15:03:40 ended    [provider: https://raw.githubusercontent.com/gptscript-ai/claude3-anthropic-provider/5b581e7b84bdd99f5d56df593d99397c9f91a8e3/tool.gpt:Anthropic Claude3 Model Provider] [output=http://127.0.0.1:10841]
15:03:40 sent     [main]
         content  [2] content | The local operating systems is Darwin, release 23.3.0
         content  [2] content | 
15:03:40 started  [service(4)] [input=null]
15:03:40 launched [service][https://raw.githubusercontent.com/gptscript-ai/ui/a0f18f328f16854a54bcf6210c2c80cf36a38c9e/tool.gpt:service] port [10513] [/usr/local/bin/gptscript sys.daemon /usr/bin/env npm run --prefix /Users/sangeethahariharan/Library/Caches/gptscript/repos/a0f18f328f16854a54bcf6210c2c80cf36a38c9e/tool.gpt/node21 dev]

> next-app-template@0.0.1 dev
> node server.mjs

> Socket server is ready at http://localhost:10513
 ○ Compiling / ...
 ✓ Compiled / in 2.1s (4300 modules)
 GET / 200 in 2368ms
15:03:44 ended    [service(4)] [output=\u003c!DOCTYPE html\u003e\u003chtml lang=\"en\"\u003e\u003chead\u003e\u003cmeta charSet=\"utf-8\"/\u003e\u003cmeta name=\"viewport\" content=\"width=dev...]
 POST / 200 in 17ms
15:03:44 continue [main]
15:03:44 started  [context: github.com/gptscript-ai/context/os]
15:03:44 sent     [context: github.com/gptscript-ai/context/os]
15:03:44 ended    [context: github.com/gptscript-ai/context/os] [output=The local operating systems is Darwin, release 23.3.0]
15:03:44 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | <tool call> port -> null
         content  [5] content | The local operating systems is Darwin, release 23.3.0
         content  [5] content | 

15:03:47 started  [port(6)] [input=null]
 ✓ Compiled /api/port in 304ms (2304 modules)
15:03:47 ended    [port(6)] [output=10513]
 POST /api/port 200 in 344ms
15:03:47 continue [main]
15:03:47 started  [context: github.com/gptscript-ai/context/os]
15:03:47 sent     [context: github.com/gptscript-ai/context/os]
15:03:47 ended    [context: github.com/gptscript-ai/context/os] [output=The local operating systems is Darwin, release 23.3.0]
15:03:47 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | <tool call> openFileNix -> {"file": "github.com/gptscript-ai/digital-ocean-agent", "port": "10513"}
         content  [7] content | The local operating systems is Darwin, release 23.3.0
         content  [7] content | 

15:03:50 started  [open-file-nix(8)] [input={"file": "github.com/gptscript-ai/digital-ocean-agent", "port": "10513"}]
15:03:50 sent     [open-file-nix(8)]
15:03:50 ended    [open-file-nix(8)]
15:03:50 continue [main]
15:03:50 started  [context: github.com/gptscript-ai/context/os]
15:03:50 sent     [context: github.com/gptscript-ai/context/os]
15:03:50 ended    [context: github.com/gptscript-ai/context/os] [output=The local operating systems is Darwin, release 23.3.0]
15:03:50 sent     [main]
         content  [1] content | Waiting for model response... ○ Compiling /run ...
 ✓ Compiled /run in 900ms (4999 modules)
 GET /run?file=github.com/gptscript-ai/digital-ocean-agent 200 in 1276ms

         content  [1] content | The task has been completed. The file "github.com/gptscript-ai/digital-ocean-agent" has been opened using the openFileNix tool with the port 10513.
         content  [9] content | The local operating systems is Darwin, release 23.3.0
         content  [9] content | 
15:03:53 ended    [main] [output=The task has been completed. The file \"github.com/gptscript-ai/digital-ocean-agent\" has been opened...]

INPUT:

--file=github.com/gptscript-ai/digital-ocean-agent

OUTPUT:

The task has been completed. The file "github.com/gptscript-ai/digital-ocean-agent" has been opened using the openFileNix tool with the port 10513.
 POST /run?file=github.com/gptscript-ai/digital-ocean-agent 200 in 1795ms
 POST /run?file=github.com/gptscript-ai/digital-ocean-agent 200 in 6ms
 POST /run?file=github.com/gptscript-ai/digital-ocean-agent 200 in 5ms
 POST /run?file=github.com/gptscript-ai/digital-ocean-agent 200 in 20ms
 POST /run?file=github.com/gptscript-ai/digital-ocean-agent 200 in 7ms
 ⨯ unhandledRejection: error, status code: 500, message: Internal Server Error
 ⨯ unhandledRejection: error, status code: 500, message: Internal Server Error
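The two `unhandledRejection` lines are the LLM-call failures escaping to the Node process instead of reaching the chat. One way the server could catch them and forward a message to the client is sketched below; `emitToClient` is a hypothetical stand-in for whatever socket send function the UI server actually uses, not gptscript's real API:

```typescript
// Sketch: forward otherwise-silent promise rejections to the chat UI
// so the user is not left on "Waiting for model response...".
type ErrorSink = (message: string) => void;

function installRejectionHandler(emitToClient: ErrorSink): void {
  process.on("unhandledRejection", (reason: unknown) => {
    const message =
      reason instanceof Error ? reason.message : String(reason);
    // Surface the failure as a chat response instead of only
    // logging it to the server console.
    emitToClient(`LLM call failed: ${message}`);
  });
}
```

With a handler like this, the "error, status code: 500" seen above would appear in the chat rather than only in the server log.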

Expected Behavior: The UI should show the error message to the user.

sangee2004 commented 1 month ago

Tested with the latest UI build.

The user is now presented with the error message from the failed LLM call as the chat response.

The user can also continue the chat after this error, if they choose to.