langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com

Timeout Error OpenAI #3512

Closed shreyabhadwal closed 3 months ago

shreyabhadwal commented 1 year ago

I am facing a warning similar to the one described in #3005:

```
WARNING:langchain.embeddings.openai:Retrying langchain.embeddings.openai.embed_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).
```

It just keeps retrying. How do I get around this?

dnrico1 commented 1 year ago

Same for me as well

La1c commented 1 year ago

Getting the same error with the map-reduce summarization chain. The vanilla OpenAI API works as expected.

gabacode commented 1 year ago

Same, following 👀

shreyabhadwal commented 1 year ago

@dnrico1 @La1c @gabacode When are y'all getting the error? For instance, I am getting it through my websocket app deployed on Azure (it's a chatbot application). Weirdly enough, I don't face it when I run the application locally.

bkamapantula commented 1 year ago

+1

OpenAI chat endpoint always seems to time out when using the summarization chain.

It works with the Anthropic endpoint, though.

Binb1 commented 1 year ago

+1

@shreyabhadwal Experiencing the exact same behaviour. Local works well, but it times out on Azure.

shreyabhadwal commented 1 year ago

@Binb1 do the timeouts happen every time for you or occasionally? Also, are you using websockets or SSE?

Binb1 commented 1 year ago

@shreyabhadwal Strangely enough, every time I deploy a new version of my app it seems to work well. But after a few minutes I get timeouts, and so far I can't really understand why. I'm using SSE. I've tested a lot of different options and I have the same problem whether I make the call through the OpenAI Python SDK or through LangChain.

shreyabhadwal commented 1 year ago

@Binb1 I experience the exact same behavior. It works well if I restart the app, and then after a few minutes when I try again I get timeouts. Very weird.

Interestingly, I have tried doing it without streaming and it seems to be working well. I don't quite understand it.
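For anyone who wants to test the same thing, toggling streaming on the LangChain side looks roughly like this (a sketch only, assuming the pre-1.0 `ChatOpenAI` wrapper):

```python
from langchain.chat_models import ChatOpenAI

# Streaming on: tokens arrive incrementally over a long-lived connection,
# which some proxies/load balancers silently drop when idle.
streaming_llm = ChatOpenAI(streaming=True, request_timeout=600)

# Streaming off: a single blocking request/response, which is the mode
# that works for me.
blocking_llm = ChatOpenAI(streaming=False, request_timeout=600)
```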

Binb1 commented 1 year ago

@shreyabhadwal This makes me think that it is more Azure than langchain/openai related then 😕

I have not tried streaming yet as I don't really need it but it fails for me even without it. So strange.

It feels like the webapp needs a "warmup" before being able to make the calls.

gabacode commented 1 year ago

Increasing the timeout fixes it for me! Thanks @timothyasp!

firezym commented 1 year ago

+1. I set the timeout to 300s, but after every 3 to 5 requests it still fails with a timeout...

timothyasp commented 1 year ago

OpenAI requests can run as long as 600s, and if you're sending large-token prompts to gpt-4, 300s might be too low. So I'd set it at 600s and hope for the best. That said, I have noticed latencies on OpenAI's end being a lot higher over the last week or two.
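In LangChain that means passing `request_timeout` through to the client; a minimal sketch, assuming the pre-1.0 wrappers, where the value is forwarded to the `openai` package:

```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings

# Match the API's 600 s ceiling so the client doesn't give up
# before a long gpt-4 completion has a chance to finish.
llm = ChatOpenAI(model_name="gpt-4", request_timeout=600)
embeddings = OpenAIEmbeddings(request_timeout=600)
```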


ColinTitahi commented 1 year ago

@shreyabhadwal @Binb1 any luck with Azure?

Same issue: local is fine and fast, but on Azure there are problems. Something seems to fall asleep after 4-10 idle minutes. For me, "Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry" gets logged before the chat call completes. After it times out, the retry returns and everything is good until the app sits idle for another 4-10 minutes. So increasing the timeout just increases how long I wait before it times out and retries.

Driving me nuts and suspect there is a simple configuration I'm missing.
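If the problem really is an idle connection being dropped, one workaround to try is a periodic keep-alive ping; a sketch only, and the `keep_warm` helper below is hypothetical, not something LangChain provides:

```python
import threading
import time

from langchain.chat_models import ChatOpenAI

def keep_warm(llm: ChatOpenAI, interval_s: int = 180) -> None:
    """Hypothetical helper: issue a tiny request every few minutes so the
    outbound connection never sits idle long enough to be dropped."""
    def _loop():
        while True:
            try:
                llm.predict("ping")  # cheap one-token call; ignore the reply
            except Exception:
                pass  # even a failed ping re-establishes the connection pool
            time.sleep(interval_s)
    threading.Thread(target=_loop, daemon=True).start()

keep_warm(ChatOpenAI(max_tokens=1, request_timeout=60))
```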

shreyabhadwal commented 1 year ago

Nope, nothing yet. @Binb1 @ColinTitahi, are y'all using async calls to OpenAI?

ColinTitahi commented 1 year ago

@shreyabhadwal Not explicitly, so I don't think so. I'm using generate on ChatOpenAI so I can get the llm_output tokens etc., plus a run call to a chat-conversational-react-description agent with some additional tools. These endpoints in my Flask app are called from client-side JavaScript, which uses async to wait for the response. It's like something gets set up when the Flask app first starts, then falls asleep or disconnects after say 4-5 minutes, and then has to wait for the timeout to occur before reconnecting when the user calls it. Hence upping the timeout just increases that initial wait.

I'm using the OpenAI Chat model and hosting on an Azure web service.

sagardspeed2 commented 1 year ago

I am getting the same error with model gpt-4-0314, max_tokens = 2048, and request_timeout = 240, both locally and on the live server. Yesterday this was working fine.

DennisSchwartz commented 1 year ago

Same issue here. Running it in a Kubernetes Pod deployed to an AWS cluster and using async calls. Works perfectly locally but times out as soon as it's in the cluster.

Weirdly, calling the OpenAI LLM directly works, but when running the Agent it gets stuck.

This works:

```python
agent_executor = get_agent(user_token)
# Pull the raw OpenAI client out of the agent's LLM and call it directly.
driver = agent_executor.agent.llm_chain.llm
cl = driver.client()
print(cl.create(model=driver.model_name, prompt='Tell me a poem'))
```

But this does not:

```python
await agent_executor.arun(query)
```

DennisSchwartz commented 1 year ago

Ok so from the comments above I realised I was testing async in one case and blocking in the other.

```python
print(await cl.acreate(model=driver.model_name, prompt='Tell me a poem'))
```

Does indeed also time out and fail to run! So there definitely seems to be an issue with the async running of OpenAI. I'm going to try Anthropic for now. :)


UPDATE

I still can't make it run, neither for OpenAI nor Anthropic - but I think I know what's going on.

Our Kubernetes cluster running the application blocks internet access through a Squid proxy. The OpenAI API is allowed, but only for HTTP requests. I think the OpenAI client is probably using web sockets to stream the responses, and this is blocked by our proxy/firewall. I've resorted to the sync path for now until we can figure out how to fix our proxy.
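If a proxy is in the way, both the openai package and LangChain's wrappers can be pointed at it explicitly; a sketch, assuming the pre-1.0 openai client's module-level `proxy` setting and the `openai_proxy` field on the LangChain wrappers (the proxy address is hypothetical):

```python
import openai
from langchain.chat_models import ChatOpenAI

# openai-python v0.x lets you route its traffic through an explicit proxy.
openai.proxy = "http://squid.internal:3128"  # hypothetical proxy address

# LangChain's wrappers accept the same setting directly.
llm = ChatOpenAI(openai_proxy="http://squid.internal:3128")
```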

jpsmartbots commented 1 year ago

I have the same issue. I am hitting the completions API on the text-davinci-003 engine. I am unable to replicate the issue locally, where it always works. When I containerize and deploy it in AWS Lambda, I sometimes get the following error (I don't know when):

```
Request timed out: HTTPSConnectionPool(host='instanceid.openai.azure.com', port=443): Max retries exceeded with url: //openai/deployments/textdavinci003/completions?api-version=2022-12-01 (Caused by ConnectTimeoutError(, 'Connection to instanceid.openai.azure.com timed out. (connect timeout=5)')
```

Any resolution?

maxmarkov commented 1 year ago

It could be a problem with the SSL certificate. Set the path to your certificate bundle as an environment variable:

```python
import os
os.environ["REQUESTS_CA_BUNDLE"] = "PATH_TO_YOUR_CERTIFICATE/YOUR_CERTIFICATE.crt"
```

bigrig2212 commented 1 year ago

Same issue here. Works for a bit and then starts timing out. I just can't nail down when it happens or why; there doesn't seem to be a rhyme or reason. It seems to happen a lot more on production (GCP) than locally, although it happens on both, and with short sentences more than long ones, although not exclusively. It happens a LOT though, like 1 out of 4 requests.

flake9 commented 1 year ago

+1

HaochenQ commented 12 months ago

I have the same issue. Works well locally, but faces timeout issues when the app is deployed to Azure App Service for Linux (Python or custom container).

jpsmartbots commented 12 months ago

Hi @HaochenQ,

Maybe deploying your solution on a virtual machine would solve your problem. When I moved from AWS Lambda to EC2, the problem got resolved.

HaochenQ commented 12 months ago

> Hi @HaochenQ,
>
> Maybe deploying your solution on a virtual machine would solve your problem. When I moved from AWS Lambda to EC2, the problem got resolved.

Thank you @jpsmartbots, I tried to deploy my container with an Azure VM, but the issue persists.

For those of you who are facing 504 gateway timeout issues with Azure App Services (`Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).`): the cause is that the default HTTP timeout of Azure App Service is 230/240 seconds, while the default timeout of the OpenAI APIs is 600 seconds. Before LangChain hears back from OpenAI and does a retry, Azure returns an error and our app appears down. You can use `request_timeout`, e.g. `OpenAIEmbeddings(request_timeout=30)`, to avoid the timeout on the Azure side, and somehow the retry call to OpenAI from LangChain can then always work.

Not sure why the LangChain call to OpenAI after a period of inactivity fails and causes a timeout in the first place.
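In code, the workaround is just to keep the per-request timeout well under Azure's 230 s limit so LangChain's own retry fires before Azure surfaces a 504; a sketch:

```python
from langchain.embeddings import OpenAIEmbeddings

# Keep each attempt well under Azure App Service's ~230 s HTTP limit;
# LangChain's built-in retry with backoff then handles the stalled call
# instead of Azure returning a 504 to the client.
embeddings = OpenAIEmbeddings(request_timeout=30, max_retries=6)
```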

ShantanuNair commented 11 months ago

Hey all, I believe this fix in the openai-python client should also help with this issue, and with generations:

https://github.com/openai/openai-python/pull/387

The async and sync request_timeouts are NOT identical.
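In other words, a timeout that behaves on the blocking path may not behave on the async one, so it's worth testing both explicitly; a sketch, assuming the pre-1.0 wrappers:

```python
import asyncio

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(request_timeout=60)

# Sync path: uses the requests-based transport in openai-python v0.x.
print(llm.predict("Tell me a poem"))

# Async path: uses the aiohttp-based transport, which handled the
# timeout differently before the linked fix.
print(asyncio.run(llm.apredict("Tell me a poem")))
```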

luoqingming110 commented 9 months ago

same problem

ryoung562 commented 9 months ago

I'm running into the same issue. I am running a proxy container that talks to the OpenAI API; it works locally, but not when I deploy it to Railway.

mallapraveen commented 7 months ago

Did anyone fix this? I'm running into the same issue when I use the summarize map-reduce chain from LangChain on AWS Lambda.

lbaiao commented 2 months ago

It's 2024 and I'm facing the same issue.

Using LangChain in a Flask app hosted in an Azure Web App, calling the Anthropic Claude 3 Haiku model on AWS Bedrock.

The first LangChain request takes about 2 minutes to return. The following ones return smoothly. After about 7 idle minutes, the first request takes too long again.

Can't reproduce this issue locally. It only happens in Azure environment.

When testing with the boto3 AWS Python SDK, the requests return fast every time, with no issues.