getsentry / relay

Sentry event forwarding and ingestion service.
https://docs.sentry.io/product/relay/

[FR] Add a "light proxy mode" with an optimized memory profile. #3021

Open elonzh opened 9 months ago

elonzh commented 9 months ago

In some cases users might want to deploy a Relay instance in "light proxy mode" with an optimized memory profile.

Reference: Memory usage too high #3012

jjbayer commented 9 months ago

Thanks!

jjbayer commented 9 months ago

Idea: Separate executable for proxy / static mode.

steffen25 commented 9 months ago

Hi @jjbayer

I came across this issue two days ago: https://github.com/getsentry/relay/issues/3012. I'm also interested in a light proxy mode with an optimized memory profile, since I run Relay on multiple small instances with a limited amount of memory.

I saw your comment here https://github.com/getsentry/relay/issues/3012#issuecomment-1916634451 about where the memory is used internally. You mentioned metrics and outcomes. Are these services required for Relay to work as a "light proxy" that "just" forwards requests to the upstream? If not, would it make sense to enable/disable them behind a feature flag, like processing does? Maybe you could point me in the right direction. I only came across Relay a couple of weeks ago, so I'm still pretty new to the codebase, though I have used Sentry for years. I really appreciate all the open source work you do for the community.

jjbayer commented 8 months ago

@steffen25 thanks for reaching out!

I see that you mentioned metrics and outcome. Are these services required for Relay to work as a "light proxy" that "just" forwards requests to the upstream?

We're currently working on a fix for metrics in proxy mode to forward metrics without buffering. To be honest, though, I'm not sure that bug explains the high memory usage seen by @elonzh.

would it make sense to enable/disable them by a feature flag like processing does it?

Possibly. Before making any changes, I would investigate what Relay actually spends memory on in proxy mode, with something like jemalloc heap profiling.
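For reference, a rough sketch of what such a profiling session could look like. This assumes a Relay binary built against jemalloc with profiling support compiled in; the environment variable name (`MALLOC_CONF` vs. a prefixed variant such as `_RJEM_MALLOC_CONF`, depending on how the allocator crate was vendored) and the paths below are only examples, not Relay's documented interface:

```shell
# Hypothetical profiling run; assumes jemalloc with profiling compiled in.
# prof:true enables heap profiling, lg_prof_interval:30 dumps a profile
# roughly every 2^30 bytes (~1 GiB) allocated, prof_prefix sets the dump path.
export MALLOC_CONF="prof:true,prof_prefix:/tmp/relay.heap,lg_prof_interval:30"
/usr/local/bin/sentry-relay run --config /etc/sentry-relay

# Inspect the resulting dumps with jemalloc's jeprof tool:
jeprof --text /usr/local/bin/sentry-relay /tmp/relay.heap.*.heap
```

The text report attributes live allocations to call stacks, which should show whether the memory sits in metrics buckets, outcome aggregation, or somewhere else entirely.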

Contributions are always welcome!

steffen25 commented 8 months ago

I see that https://github.com/getsentry/relay/pull/3106/ has been merged to master. I was curious to see whether it would improve memory consumption, so I cloned the repo, compiled the Linux binary, and deployed it to my test server. For reference, I'm running Relay as a systemd service. I then visited my website (React frontend, Laravel backend) with a trace sample rate of 1.0 and clicked a few links to generate some load on the backend.

Then I checked the status of the Relay service.

After a restart:

● sentry-relay.service - Sentry Relay
     Loaded: loaded (/etc/systemd/system/sentry-relay.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-02-21 12:36:22 CET; 2s ago
   Main PID: 2740691 (sentry-relay)
      Tasks: 23 (limit: 9237)
     Memory: 7.7M
        CPU: 23ms
     CGroup: /system.slice/sentry-relay.service
             └─2740691 /usr/local/bin/sentry-relay run --config /etc/sentry-relay

After a few requests have passed through:

● sentry-relay.service - Sentry Relay
     Loaded: loaded (/etc/systemd/system/sentry-relay.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-02-21 12:36:22 CET; 47s ago
   Main PID: 2740691 (sentry-relay)
      Tasks: 27 (limit: 9237)
     Memory: 617.3M
        CPU: 772ms
     CGroup: /system.slice/sentry-relay.service
             └─2740691 /usr/local/bin/sentry-relay run --config /etc/sentry-relay
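To watch how the number grows between two such snapshots, the same counter systemd reports can be polled directly from the service's cgroup. This sketch assumes cgroup v2 and the unit name from this setup:

```shell
# Poll the cgroup v2 memory counter for the service every 5 seconds.
# memory.current is in bytes; divide by 1048576 (1 MiB) for readability.
CG=/sys/fs/cgroup/system.slice/sentry-relay.service
while sleep 5; do
  printf '%s ' "$(date +%T)"
  awk '{ printf "%.1f MiB\n", $1 / 1048576 }' "$CG/memory.current"
done
```

Logging this alongside the request load makes it easier to tell whether memory grows with traffic or plateaus.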

Config:

# Please see the relevant documentation.
# Performance tuning: https://docs.sentry.io/product/relay/operating-guidelines/
# All config options: https://docs.sentry.io/product/relay/options/
relay:
  mode: proxy
  upstream: https://upstream.com # redacted
  host: 127.0.0.1
  port: 3000

Dav1dde commented 8 months ago

Unfortunately, #3106 did not change anything about memory consumption; it just means metrics are now correctly forwarded as well.