FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0
31.31k stars 16.31k forks

[BUG] Chatflow Agent hangs on Groq LLM query, requires manual chat window refresh to display response #3043

Open burny91 opened 2 months ago

burny91 commented 2 months ago

Describe the bug
If I prompt my chatflow agent (Groq as the LLM) with "what division equals 1337?", the chat window shows the "thinking" animation, then stops without providing an answer. Only after I close and re-open the chat window does the chat history reload, and I then see the latest answer from the agent. This happens on every prompt I perform. Tested locally and on my PaaS system on Render, with the latest version of flowise@2.0.5.

This behaviour started today, 2024-08-20, after syncing my Flowise fork.

Any hint is very much appreciated.

Cheers Christian

To Reproduce
Steps to reproduce the behavior:

  1. Go to chatflows.
  2. Load my simple agent (see chatflow file below) with a calculator
  3. Open the chat window and ask any question (e.g. "what division equals 1337?")
  4. The loading animation stops and no response is shown in the chat log
  5. Close the chat window by clicking the purple close button
  6. Open the chat again by clicking the purple chat button
  7. The chat log updates and renders the answer from the agent
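To separate a UI/streaming problem from a server-side one, the chatflow can also be queried over Flowise's prediction REST endpoint instead of the chat widget. A minimal sketch, assuming a default local install on port 3000 (the chatflow id placeholder and the `ask` helper are illustrative, not part of Flowise):

```python
# Sketch: call a Flowise chatflow through its HTTP prediction endpoint to check
# whether the answer is generated server-side even while the chat widget hangs.
# Base URL and chatflow id are placeholders for a default local install.
import json
import urllib.request

def build_prediction_request(base_url: str, chatflow_id: str, question: str):
    """Build the URL and JSON body for Flowise's prediction endpoint."""
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    body = json.dumps({"question": question}).encode("utf-8")
    return url, body

def ask(base_url: str, chatflow_id: str, question: str) -> str:
    """POST the question and return the raw response (requires a running server)."""
    url, body = build_prediction_request(base_url, chatflow_id, question)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

If `ask("http://localhost:3000", "<chatflow-id>", "What division equals 1337?")` returns the agent's answer while the widget still shows nothing, the bug is likely in the UI/streaming path rather than in the flow itself.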

Expected behavior

  1. Open the chatlog
  2. Enter a prompt
  3. LLM / Agent flow generates a response
  4. Response is printed in the chat log immediately
  5. User can prompt again

Screenshots

Chatflow setup image

First prompt with loading animation image

Rendering completed - no answer in chat log image

Close and Re-open of chat window. Now able to see agent response image

Flow
Exported flow attached to help replicate the problem.

Use https://console.groq.com/ to grab some api keys.
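Before digging into Flowise itself, it may be worth confirming the key works against the same endpoint the debug log below calls. A minimal sketch (the `check_groq_key` helper and the `GROQ_API_KEY` env-var convention are assumptions, not Flowise conventions):

```shell
# Sketch: sanity-check a Groq API key against the chat-completions endpoint
# that appears in the Flowise debug log. Skips the live call if no key is set.
check_groq_key() {
  if [ -z "${GROQ_API_KEY:-}" ]; then
    echo "GROQ_API_KEY not set; skipping live check"
    return 0
  fi
  curl -sS https://api.groq.com/openai/v1/chat/completions \
    -H "Authorization: Bearer ${GROQ_API_KEY}" \
    -H "Content-Type: application/json" \
    -d '{"model":"llama-3.1-70b-versatile","messages":[{"role":"user","content":"Say OK"}]}'
}

check_groq_key
```

A 200 response with a completion confirms the key and model are fine, which narrows the problem to Flowise's handling of the response.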

Agent Test Chatflow.json

Setup

Additional context

NPX Log

PS C:\Users\***> npx flowise start --DEBUG=true
2024-08-20 15:08:30 [INFO]: Starting Flowise...
2024-08-20 15:08:30 [INFO]: 📦 [server]: Data Source is initializing...
2024-08-20 15:08:34 [INFO]: 📦 [server]: Data Source has been initialized!
2024-08-20 15:08:34 [INFO]: ⚡️ [server]: Flowise Server is listening at :3000
2024-08-20 15:08:41 [INFO]: ⬆️ POST /api/v1/node-load-method/groqChat
2024-08-20 15:08:48 [INFO]: ❌ DELETE /api/v1/chatmessage/4b57a91d-d28a-4c92-a748-493454a3ec98?chatId=a45f40d5-d5fe-4907-879e-633899d69785&chatType=INTERNAL
2024-08-20 15:09:02 [INFO]: ⬆️ POST /api/v1/internal-prediction/4b57a91d-d28a-4c92-a748-493454a3ec98
2024-08-20 15:09:02 [INFO]: [server]: Chatflow 4b57a91d-d28a-4c92-a748-493454a3ec98 added into ChatflowPool
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
  "input": "What division equals 1337?"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent] Entering Chain run with input: {
  "input": "What division equals 1337?",
  "steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap] Entering Chain run with input: {
  "input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap > 4:chain:RunnableLambda] Entering Chain run with input: {
  "input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap > 5:chain:RunnableLambda] Entering Chain run with input: {
  "input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap > 6:chain:RunnableLambda] Entering Chain run with input: {
  "input": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap > 4:chain:RunnableLambda] [12ms] Exiting Chain run with output: {
  "output": "What division equals 1337?"
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap > 5:chain:RunnableLambda] [13ms] Exiting Chain run with output: {
  "output": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap > 6:chain:RunnableLambda] [15ms] Exiting Chain run with output: {
  "output": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 3:chain:RunnableMap] [19ms] Exiting Chain run with output: {
  "input": "What division equals 1337?",
  "agent_scratchpad": [],
  "chat_history": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 7:prompt:ChatPromptTemplate] Entering Chain run with input: {
  "input": "What division equals 1337?",
  "agent_scratchpad": [],
  "chat_history": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 7:prompt:ChatPromptTemplate] [2ms] Exiting Chain run with output: {
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain_core",
    "prompt_values",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain_core",
          "messages",
          "SystemMessage"
        ],
        "kwargs": {
          "content": "You are a helpful AI assistant.",
          "additional_kwargs": {},
          "response_metadata": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain_core",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "What division equals 1337?",
          "additional_kwargs": {},
          "response_metadata": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 8:llm:ChatGroq] Entering LLM run with input: {
  "messages": [
    [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain_core",
          "messages",
          "SystemMessage"
        ],
        "kwargs": {
          "content": "You are a helpful AI assistant.",
          "additional_kwargs": {},
          "response_metadata": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain_core",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "What division equals 1337?",
          "additional_kwargs": {},
          "response_metadata": {}
        }
      }
    ]
  ]
}
Groq:DEBUG:request https://api.groq.com/openai/v1/chat/completions {
  method: 'post',
  path: '/openai/v1/chat/completions',
  body: {
    tools: [ [Object] ],
    stop: [],
    model: 'llama-3.1-70b-versatile',
    temperature: 0.9,
    max_tokens: undefined,
    stream: false,
    messages: [ [Object], [Object] ]
  },
  signal: undefined,
  headers: undefined,
  stream: false
} {
  'content-length': '861',
  accept: 'application/json',
  'content-type': 'application/json',
  'user-agent': 'Groq/JS 0.3.3',
  'x-stainless-lang': 'js',
  'x-stainless-package-version': '0.3.3',
  'x-stainless-os': 'Windows',
  'x-stainless-arch': 'x64',
  'x-stainless-runtime': 'node',
  'x-stainless-runtime-version': 'v20.16.0',
  authorization: 'Bearer gsk_***REDACTED***'
}
Groq:DEBUG:response 200 https://api.groq.com/openai/v1/chat/completions Headers {
  [Symbol(map)]: [Object: null prototype] {
    date: [ 'Tue, 20 Aug 2024 13:09:03 GMT' ],
    'content-type': [ 'application/json' ],
    'transfer-encoding': [ 'chunked' ],
    connection: [ 'keep-alive' ],
    'cache-control': [ 'private, max-age=0, no-store, no-cache, must-revalidate' ],
    vary: [ 'Origin' ],
    'x-ratelimit-limit-requests': [ '14400' ],
    'x-ratelimit-limit-tokens': [ '131072' ],
    'x-ratelimit-remaining-requests': [ '14399' ],
    'x-ratelimit-remaining-tokens': [ '131049' ],
    'x-ratelimit-reset-requests': [ '6s' ],
    'x-ratelimit-reset-tokens': [ '10.528564ms' ],
    'x-request-id': [ 'req_01j5qyfe35ezc8mgc5he3sxspq' ],
    via: [ '1.1 google' ],
    'alt-svc': [ 'h3=":443"; ma=86400' ],
    'cf-cache-status': [ 'DYNAMIC' ],
    'set-cookie': [
      '__cf_bm=y5TT5tSPrTi8iKCjsL937v.Zaes6ACFB4GL7FADshhc-1724159343-1.0.1.1-3xualzs_2sbxNmKl8Cz03vEbSTMaNaVJ6VK3suT4607qLjIYQS2Waa7J1UhCnv19TRcY6hoGBj5HtGdLry_1Eg; path=/; expires=Tue, 20-Aug-24 13:39:03 GMT; domain=.groq.com; HttpOnly; Secure; SameSite=None'
    ],
    server: [ 'cloudflare' ],
    'cf-ray': [ '8b629d934c0b18af-FRA' ],
    'content-encoding': [ 'gzip' ]
  }
} {
  id: 'chatcmpl-6b63e43f-88d4-442a-a372-ea2419ed0c01',
  object: 'chat.completion',
  created: 1724159343,
  model: 'llama-3.1-70b-versatile',
  choices: [
    {
      index: 0,
      message: [Object],
      logprobs: null,
      finish_reason: 'stop'
    }
  ],
  usage: {
    queue_time: 0.382336304,
    prompt_tokens: 288,
    prompt_time: 0.08153632,
    completion_tokens: 45,
    completion_time: 0.18,
    total_tokens: 333,
    total_time: 0.26153632
  },
  system_fingerprint: 'fp_b3ae7e594e',
  x_groq: { id: 'req_01j5qyfe35ezc8mgc5he3sxspq' }
}
[llm/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 8:llm:ChatGroq] [1.52s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain_core",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>",
            "additional_kwargs": {},
            "tool_call_chunks": [],
            "tool_calls": [],
            "invalid_tool_calls": [],
            "response_metadata": {}
          }
        }
      }
    ]
  ]
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 9:parser:ToolCallingAgentOutputParser] Entering Chain run with input: {
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain_core",
    "messages",
    "AIMessageChunk"
  ],
  "kwargs": {
    "content": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>",
    "additional_kwargs": {},
    "tool_call_chunks": [],
    "tool_calls": [],
    "invalid_tool_calls": [],
    "response_metadata": {}
  }
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 9:parser:ToolCallingAgentOutputParser] [1ms] Exiting Chain run with output: {
  "returnValues": {
    "output": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>"
  },
  "log": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>"
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent] [1.55s] Exiting Chain run with output: {
  "returnValues": {
    "output": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>"
  },
  "log": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>"
}
[chain/end] [1:chain:AgentExecutor] [1.56s] Exiting Chain run with output: {
  "output": "<calculator> {\"input\": \"2 / (1 / 668.5)\"} </calculator>\n or  \n<calculator> {\"input\": \"1.337 * (1/1.337)\"}</calculator>"
}
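Note what the log shows: the model returned its tool calls serialized as plain text (`<calculator> {...} </calculator>`) while `tool_calls` stayed empty, so the ToolCallingAgentOutputParser treats the text as a final answer instead of invoking the calculator tool. A minimal sketch of detecting such pseudo tool calls in the content (the regex and helper name are illustrative, not Flowise internals):

```python
# Sketch: detect tool calls that a model emitted as inline <tool>{json}</tool>
# text rather than as structured tool_calls, as seen in the LLM output above.
import json
import re

PSEUDO_TOOL_CALL = re.compile(r"<(\w+)>\s*(\{.*?\})\s*</\1>", re.DOTALL)

def extract_pseudo_tool_calls(content: str):
    """Return (tool_name, parsed_args) pairs for <tool>{json}</tool> spans."""
    calls = []
    for name, raw_args in PSEUDO_TOOL_CALL.findall(content):
        try:
            calls.append((name, json.loads(raw_args)))
        except json.JSONDecodeError:
            pass  # malformed args: skip the span
    return calls

# The exact content string from the log above:
content = (
    '<calculator> {"input": "2 / (1 / 668.5)"} </calculator>\n'
    ' or  \n'
    '<calculator> {"input": "1.337 * (1/1.337)"}</calculator>'
)
print(extract_pseudo_tool_calls(content))
```

Running this on the logged content yields two `calculator` calls, which supports the idea that the agent finished with an unexecuted-tool-call answer rather than streaming a real response back to the widget.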
HenryHengZJ commented 2 months ago

I tested using `npx flowise start` with the latest version and it works: image

hmtbRD commented 2 months ago

I got the same issue using a Docker instance of Flowise. On top of that, it now can't call functions.

louisfds commented 1 month ago

Same issue here, also using a docker instance.

jasonworkboost commented 2 weeks ago

Same issue; for me it seems to be related to the Tool Agent.