artur-bunko-rnm opened this issue 2 months ago
Hi @artur-bunko-rnm 👋, thanks for reaching out. Can you clarify whether your Amplify app is a static app or a Server Side Rendered (SSR) app? If it's an SSR application, I recommend reviewing the hosting compute logs that Amplify delivers to CloudWatch for the compute runtime. These logs can be found in the log group named `/aws/amplify/{app-id}`. The logs should highlight any errors that may have occurred during runtime.
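For reference, that compute log group can also be tailed from the terminal with AWS CLI v2 (substitute your actual app id; this is a sketch and requires valid AWS credentials):

```shell
# Tail the Amplify hosting compute log group for a given app id
aws logs tail /aws/amplify/<app-id> --since 1h --follow
```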
Hi @Jay2113, it is a Next.js application with SSR. I reviewed the logs but did not find anything related to this issue. We suspect the problem lies with the AWS CloudFront servers, as we were able to access the application from Europe, but our customer in Africa was unable to. Additionally, the 504 error appeared to originate from the CloudFront side, with a message about too many requests to the server.
@artur-bunko-rnm thanks for the clarification. Are you continuing to observe the 504 errors for requests originating from Europe? If so, can you share your Amplify app id and a few requests IDs from the compute runtime logs?
Additionally, a "too many requests to the server" error typically indicates a 429 status code. Can you confirm whether the requests are resulting in a 504 or a 429 status code?
@Jay2113 This is an error that appeared in Africa. It was more about the servers being down; sorry for misleading you. The problem is that we don't have access to the CloudFront distribution that Amplify creates.
@artur-bunko-rnm Can you share your Amplify app id? Was this an intermittent issue, or is it a consistent problem that your end-users in Africa are experiencing?
@Jay2113 The app ID is dcferx1jatcmy. It is an intermittent issue. They also report internet problems in their region, so maybe all the problems are related.
Thanks. It's possible that the root cause of the issue could be related to network connectivity problems. I noticed that a couple of custom domains are connected to the app. Can you confirm which specific domain or branch is experiencing the issue?
The stage branch has the issue; it is now our main branch for production.
@artur-bunko-rnm Can you confirm if the end users of your website in Africa are still intermittently experiencing 504 errors?
Hello @Jay2113, no errors; everything now works as expected. We just want to know how we can avoid or prevent this error in the future.
We are facing the same issue on our Amplify app d2lgyhgye6wqxb with the dev branch. It happens on components where MongoDB might be taking a lot of time. Interestingly, all requests where this throws a 504 wait ~28 seconds, so it seems we need caching, but since we are still in the development stage we have no caching.
It's a Next.js app with GraphQL and MongoDB.
The 504 error shows up in both the network tab and the console tab of the browser.
Can you please advise how I can solve this issue? @Jay2113
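Since the comment above suggests caching the slow MongoDB-backed responses, here is a minimal in-memory TTL cache sketch (the `cacheGet`/`cacheSet`/`getWithCache` names are illustrative, not an Amplify or MongoDB API; compute instances are ephemeral, so a shared cache such as Redis or CDN cache headers would be preferable in production):

```javascript
// Minimal in-memory TTL cache: caches slow backend results so repeated
// requests don't each wait ~28s and trip the ~30s compute timeout.
const cache = new Map();

function cacheGet(key) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    cache.delete(key); // expired entry: evict and miss
    return undefined;
  }
  return entry.value;
}

function cacheSet(key, value, ttlMs) {
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
}

// Wrap a slow async fetch (e.g. a MongoDB query) with the cache.
async function getWithCache(key, ttlMs, fetchFn) {
  const hit = cacheGet(key);
  if (hit !== undefined) return hit;
  const value = await fetchFn(); // slow path
  cacheSet(key, value, ttlMs);
  return value;
}

module.exports = { cacheGet, cacheSet, getWithCache };
```

With this sketch, only the first request per key pays the slow-query cost; subsequent requests within the TTL return immediately.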
@artur-bunko-rnm @waqasjamal Thanks for your continued patience as we investigated the root cause of the 504 errors. We have identified that the issue was due to compute requests timing out after 30 seconds. We are tracking the configuration of the timeout on the compute server as a feature request here: https://github.com/aws-amplify/amplify-hosting/issues/3223.
To mitigate this problem, we recommend reviewing any potential slow APIs on the backend and optimizing them for improved performance. Thanks!
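One application-side way to avoid the hard 504 is to fail fast before the 30-second compute timeout fires: race the slow backend call against a shorter deadline and return a controlled error instead. A sketch (the 25-second budget, `withDeadline`, and `handler` are illustrative assumptions, not an Amplify API):

```javascript
// Race a slow backend call against a deadline shorter than the ~30s
// Amplify compute timeout, so the app can return a controlled error
// (e.g. 503) instead of letting CloudFront surface a 504.
function withDeadline(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('backend deadline exceeded')), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Usage sketch inside an API route handler:
async function handler(slowQuery) {
  try {
    return { status: 200, body: await withDeadline(slowQuery(), 25000) };
  } catch (err) {
    return { status: 503, body: 'backend too slow, try again' };
  }
}

module.exports = { withDeadline, handler };
```

This doesn't make the backend faster, but it keeps timeouts observable in your own logs rather than only as opaque CloudFront 504s.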
@Jay2113 I am also getting 504 errors in my Amplify app for client-side requests to my EC2 instance that take more than 55 seconds.
I use Amplify rewrites to route these requests.
On my EC2 instance I have configured nginx to ignore client aborts, so I can see that the request completes. I believe the 504 is raised by the Amplify rewrite proxy server, but I don't have much ability to access its logs to confirm, or to configure a timeout there.
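For context, a proxy rewrite of this kind looks roughly like the following in the JSON editor of Amplify's "Rewrites and redirects" settings (the EC2 hostname is a placeholder; note that this file only defines the routing, and the proxy's upstream timeout is not configurable here):

```json
[
  {
    "source": "/api/<*>",
    "target": "https://my-ec2-backend.example.com/api/<*>",
    "status": "200",
    "condition": null
  }
]
```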
Description
Hello, we've received a report from one of our clients that they're experiencing a 504 error when trying to access our site, specifically related to CloudFront. However, when we try to access the site ourselves, everything appears to be working correctly. Upon reviewing our metrics, we've noticed that some 5xx errors are being reported, but we're unable to find any corresponding errors in CloudWatch. We're trying to determine the cause of these errors and prevent them from happening in the future. Can you please help us investigate this issue and provide guidance on how to resolve it?