langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Request Timeout / Taking too long #23060

Open lbaiao opened 2 weeks ago

lbaiao commented 2 weeks ago

Checked other resources

Example Code

```python
chain = LLMChain(
    llm=self.bedrock.llm,
    prompt=self.prompt_template,
)
chain_result = chain.predict(statement=text).strip()
```
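Since the symptom is a request that hangs for minutes, one common mitigation is to pass a boto3 client with explicit timeouts and retry limits into the LangChain wrapper. This is a sketch, not the reporter's code: it assumes the `langchain_aws` Bedrock classes accept a prebuilt `client`, and the region, model id, and timeout values are all illustrative.

```python
import boto3
from botocore.config import Config

# Illustrative timeouts/retries; tune for your workload.
bedrock_client = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",  # assumption: substitute your actual region
    config=Config(
        connect_timeout=5,      # fail fast if the TCP connection can't be made
        read_timeout=120,       # cap how long a single response may take
        retries={"max_attempts": 2},
    ),
)

# The client can then be handed to the LangChain Bedrock wrapper, e.g.:
# llm = ChatBedrock(
#     model_id="anthropic.claude-3-haiku-20240307-v1:0",
#     client=bedrock_client,
# )
```

With an explicit `connect_timeout`, a networking problem surfaces as a fast, explicit error instead of a 2-minute stall, which makes the Azure-only behavior easier to diagnose.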

Error Message and Stack Trace (if applicable)

No response

Description

I'm facing an issue similar to #3512 .

Using Langchain in a Flask App, hosted in an Azure Web App. Calling Anthropic Claude3 Haiku model in AWS Bedrock.

The first LangChain request takes about 2 minutes to return; subsequent ones return quickly. After roughly 7 idle minutes, the first request is slow again.

Can't reproduce this issue locally. It only happens in Azure environment.

When testing with the boto3 AWS Python SDK, the requests return quickly every time, with no issues.
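To make this comparison concrete, per-call latency can be logged with a small stdlib timer. The wrapper below is a generic sketch; the actual Bedrock call (`chain.predict(...)` or boto3's `invoke_model`) is stood in by a placeholder lambda.

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - start

# Stand-in for the real LangChain / boto3 call:
result, elapsed = timed_call(lambda text: text.upper(), "hello")
print(f"call took {elapsed:.3f}s -> {result!r}")
```

Logging `elapsed` around both the LangChain path and the boto3 path in the deployed app would show whether the 2-minute delay happens inside the HTTP request itself or earlier (e.g., during credential resolution or client construction).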

System Info

- langchain==0.2.3
- OS: Linux (Debian slim-bookworm)
- Python: 3.12.3
- container image: python:3.12.3-slim-bookworm

keenborder786 commented 2 weeks ago

If this is only happening in the Azure environment, then the issue is probably not LangChain. Please check the networking settings of your Azure Web App and make sure no firewall rule is blocking or delaying outbound traffic.
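One quick way to test the firewall hypothesis from inside the Web App is a raw TCP connect to the Bedrock endpoint. This is a stdlib-only sketch; the endpoint hostname (and hence the region) is an assumption to be replaced with whatever the app actually calls.

```python
import socket
import time

def tcp_connect_time(host: str, port: int = 443, timeout: float = 10.0) -> float:
    """Return seconds taken to open a TCP connection to host:port.

    If a firewall silently drops packets, this hangs until `timeout`
    and raises an exception instead of returning.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# Example (run from inside the Web App; region is an assumption):
# print(tcp_connect_time("bedrock-runtime.us-east-1.amazonaws.com"))
```

If the raw connect is consistently fast while the first LangChain call is slow, the problem is more likely above the network layer (e.g., TLS session setup, DNS caching, or credential refresh) than a firewall rule.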

lbaiao commented 2 weeks ago

I'm going to check the firewall. However, since the requests made with boto3 work fine, does a firewall issue make sense? Both LangChain and boto3 make requests to the same endpoints/services.