@Meriem-BM Thank you for your report - we will handle optimising the route.
FYI - we are actually using Caddy as a reverse proxy.
I enabled support for server-sent events on the requested route.
Now I can connect with Postman and stay connected.
However, we should still fine-tune the timeout values and other connection properties as needed.
Do you have any suggestions for timeout values, @Meriem-BM?
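For reference, these are the timeout-related knobs Caddy's reverse_proxy HTTP transport exposes. This is a sketch only (not applied anywhere yet), and the values are placeholders meant to start the discussion, not tested settings:
handle /events {
    reverse_proxy impact-graph:4000 {
        # stream the response to the client immediately (needed for SSE)
        flush_interval -1
        transport http {
            dial_timeout 10s              # how long to wait when connecting to the upstream
            response_header_timeout 30s   # how long to wait for the upstream's response headers
            keepalive 2m                  # how long idle upstream connections are kept open
        }
    }
}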
Thanks for doing the investigation, @Meriem-BM. I can connect too after Kay's modifications. Can you confirm it's working for you as well?
@Meriem-BM Check this link for reference on what Kay was suggesting: https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#flush_interval
Yeah, it's working in Postman, but not from the frontend. I'll check further to see why.
Hey @geleeroyale, can you share the modifications you added to the config, please?
@Meriem-BM Here you go:
handle /events {
    # Reverse proxy configuration for SSE
    reverse_proxy impact-graph:4000 {
        # stream responses to the client immediately instead of buffering
        flush_interval -1
        # tell downstream buffering proxies not to buffer the stream
        header_down X-Accel-Buffering "no"
        transport http {
            compression off
        }
    }
}
@mhmdksh could you please add these headers too:
header_down Access-Control-Allow-Origin "*"
header_down Content-Type "text/event-stream"
header_down Transfer-Encoding "chunked"
Just to see if it will work in the app. Postman bypasses CORS while the app enforces it, so maybe that's why it isn't working in the app.
@Meriem-BM This is the configuration now:
handle /events {
    # Reverse proxy configuration for SSE
    reverse_proxy impact-graph:4000 {
        flush_interval -1
        header_down X-Accel-Buffering "no"
        header_down Access-Control-Allow-Origin "*"
        header_down Content-Type "text/event-stream"
        header_down Transfer-Encoding "chunked"
        transport http {
            compression off
        }
    }
}
Let me know if it works for you; if not, let's schedule a hacking session.
Yeah, it still didn't work in the app. Let me know when works for you, thanks.
@Meriem-BM I noticed there is an open issue in the Caddy repo for reverse-proxied SSE events.
Check the details here: https://github.com/caddyserver/caddy/issues/6293
Does this ring a bell regarding our deployed approach?
I tried the below curl directly on the impact-graph deployment (WITHOUT the Caddy proxy):
curl 'http://localhost:4000/events' \
-H 'accept: text/event-stream' \
-H 'accept-language: en-US,en;q=0.7' \
-H 'cache-control: no-cache' \
-H 'origin: http://localhost' \
-H 'priority: u=1, i' \
-H 'sec-fetch-dest: empty' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-site: same-site' \
-H 'sec-gpc: 1' \
-H 'x-accel-buffering: no' \
-H 'connection: keep-alive'
But it was still hanging, which leads me to think that the CORS config in the reverse proxy is OK, but we still need to figure out how to call this /events endpoint so it doesn't hang.
@geleeroyale @Meriem-BM I noticed that there was a problem with how CORS was being applied in the Caddy reverse proxy; I was able to conclude that when I disabled CORS altogether. The issue was that we had one CORS config that was being applied globally to all the endpoints pointing to our backend services.
So to solve the issue, I have defined the below CORS config, to be applied only to our SSE endpoint /events:
(cors-sse) {
    header {
        Access-Control-Allow-Origin "*"
        Access-Control-Allow-Credentials true
        Access-Control-Expose-Headers "*"
        Vary Origin
        # defer applies these header operations when the response is written,
        # so they can override headers coming back from the upstream
        defer
    }
}
And then called it on this specific route as you see below:
handle /events {
    # Reverse proxy configuration for SSE
    import cors-sse
    reverse_proxy impact-graph:4000 {
        flush_interval -1
        transport http {
            compression off
            versions 1.1
        }
        header_down Content-Type "text/event-stream"
        header_down X-Accel-Buffering "no"
    }
}
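For clarity, here is a sketch of how these pieces fit together (not the exact production file): the (cors-sse) snippet is defined at the top level of the Caddyfile, and the /events handle imports it from inside the {$IMPACT_GRAPH_URL} site block, roughly like this:
(cors-sse) {
    header {
        Access-Control-Allow-Origin "*"
        Access-Control-Allow-Credentials true
        Access-Control-Expose-Headers "*"
        Vary Origin
        defer
    }
}

{$IMPACT_GRAPH_URL} {
    handle /events {
        # CORS only for the SSE endpoint
        import cors-sse
        reverse_proxy impact-graph:4000 {
            flush_interval -1
            transport http {
                compression off
                versions 1.1
            }
            header_down Content-Type "text/event-stream"
            header_down X-Accel-Buffering "no"
        }
    }
}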
And it's important to note that instead of importing the CORS config globally, as you see below:
{$IMPACT_GRAPH_URL} {
    # Call the cors for global configs (All routes)
    import cors
    # Handling Restricted Paths Routes
    route @restrictedPaths {
        import cors
        respond @publicIPAccess 403
        reverse_proxy impact-graph:4000 {
            transport http {
                response_header_timeout 300s
                dial_timeout 300s
            }
        }
    }
}
We now import it as below:
{$IMPACT_GRAPH_URL} {
    # Call the cors for whitelisted domains
    # import cors (Removed from global config)
    # Handling Restricted Paths Routes
    route @restrictedPaths {
        import cors # Set to specific route config
        respond @publicIPAccess 403
        reverse_proxy impact-graph:4000 {
            transport http {
                response_header_timeout 300s
                dial_timeout 300s
            }
        }
    }
}
I will update our setup in the giveth-all repo
@geleeroyale This config has been updated in the giveth-all setup with this push: https://github.com/Giveth/giveth-all/commit/9a1fd594c50349e0ffc6bc7685a37d6ef5b588c2
GM! Can we move this to Done, or does it need a review, and does it affect the Stellar release?
I'm not aware of the details of this issue. Should it be tested by QA, @Meriem-BM?
@maryjaf I think it was already tested by @Meriem-BM. This can be closed (it's not directly related to the Stellar release, but it is a prerequisite infra config).
We are currently facing issues with Server-Sent Events (SSE) in our staging backend, particularly when accessed through our Nginx reverse proxy. The SSE connections are either being blocked or left pending indefinitely, so the connection to the event stream is never established.
After investigating and trying with some tunnels that expose my backend without a reverse proxy, where it worked, I came to the conclusion that the reverse proxy might be the reason.
To test this behaviour, you can send a request to https://impact-graph.serve.giveth.io/events using Postman (you will see that it stays pending).
Here is how it should normally look when successful:
I think the issue has to do with buffering, which is enabled by default on Nginx and which we should disable for the /events route.