elastic / apm

Elastic Application Performance Monitoring - resources and general issue tracking for Elastic APM.
https://www.elastic.co/apm
Apache License 2.0

Spec that agents in Lambda should *not* do back-off #613

Closed · trentm closed this 2 years ago

trentm commented 2 years ago

This tweaks the Lambda and transport specs to say that APM agents in Lambda should not implement back-off for repeated failing intake requests.

Motivation

If a user configures a Lambda function with one of the agents and sets the environment variables for the APM Lambda extension, but does not include the extension layer in their Lambda function, then the APM agent will get errors when attempting to send to the local extension. If the agent implements back-off on repeated intake request errors, it will get into a state where it is delaying intake requests. This could interfere with its ?flushed=true signalling to the extension at the end of each invocation.

This should be relatively minor, because (a) it is a configuration error and (b) if the extension is missing it obviously won't be forwarding APM data anyway. However, at least with the Node.js APM agent it can lead to the user's Lambda function returning null instead of its actual response: https://github.com/elastic/apm-agent-nodejs/issues/2598. In other words, the broken APM agent is causing harm.

In general, if the extension is missing or is frequently erroring on its intake API endpoint, it isn't the responsibility of the APM agent to back off. The point of APM agent back-off (if I understand correctly) is to avoid overloading APM server, especially when it is responding with "queue is full" -- i.e. backpressure. However, because the extension forwards APM data asynchronously, the APM agent doesn't see the actual responses from APM server, so it can't meaningfully handle backpressure. It is, or should be, the responsibility of the extension to handle buffering and back-off.
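To make the interference concrete, here is a minimal sketch (hypothetical names and shapes, not the actual agent transport) of an end-of-invocation flush that naively waits out an agent-side back-off before sending the `?flushed=true` request; if the delay outlasts the invocation, the extension never gets the signal:

```ts
// Hypothetical sketch of an agent-side intake client with back-off state.
type IntakeClient = {
  // Milliseconds until the next intake request is allowed; 0 when healthy.
  backoffRemainingMs: number;
  // Sends buffered events to the local extension's intake endpoint.
  send(events: unknown[], opts?: { flushed?: boolean }): Promise<void>;
};

// At the end of each invocation the agent tells the extension that all data
// for this invocation has been sent (the `?flushed=true` query parameter).
async function endOfInvocationFlush(client: IntakeClient, pending: unknown[]): Promise<void> {
  if (client.backoffRemainingMs > 0) {
    // Naively waiting out the back-off: the Lambda runtime may freeze the
    // function before this timer fires, so the signal below is never sent.
    await new Promise((resolve) => setTimeout(resolve, client.backoffRemainingMs));
  }
  await client.send(pending, { flushed: true });
}
```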


apmmachine commented 2 years ago

💚 Build Succeeded



Build stats
* Start Time: 2022-03-21T05:58:00.840+0000
* Duration: 3 min 49 sec

felixbarny commented 2 years ago

I was originally worried that a backoff would imply that the flush blocks until the end of the backoff or until the flush times out. I now remember what the Java agent already does to avoid this situation: if there's a backoff, the flush call returns immediately. Also, there's a configurable timeout for how long a flush call can block at most, which defaults to 1s.

A downside of completely disabling the backoff is that if all attempts to connect to the extension fail (for example because the extension layer is not installed), each individual event will cause the agent to attempt creating a new connection. This will lead to more errors (thus, potentially more verbose logging) and overhead due to the repeated attempts to establish a connection.

Therefore, we may instead want to standardize the flush semantics (return immediately while in backoff, with a configurable timeout).

As an additional benefit, this keeps the APM Server Sender logic within agents simple by not having two different backoff strategies for Lambda/non-Lambda. Having said that, looking at https://github.com/elastic/apm-nodejs-http-client/pull/180, it's not really complex.
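For illustration, a minimal sketch of the flush semantics proposed above (return immediately while in back-off, otherwise block for at most a configurable timeout; the 1s default is taken from the comment above, and the names are illustrative rather than any agent's actual API):

```ts
interface Reporter {
  // True while the agent is in a back-off/grace period after failed intake requests.
  inBackoff(): boolean;
  // Resolves once all buffered events have been handed off to the extension.
  drain(): Promise<void>;
}

const DEFAULT_FLUSH_TIMEOUT_MS = 1000; // assumed 1s default, per the comment above

async function flush(reporter: Reporter, timeoutMs = DEFAULT_FLUSH_TIMEOUT_MS): Promise<void> {
  if (reporter.inBackoff()) {
    // Short-circuit: a flush never waits out a back-off period.
    return;
  }
  // Otherwise wait for the drain, but never longer than the configured timeout.
  await Promise.race([
    reporter.drain(),
    new Promise<void>((resolve) => setTimeout(resolve, timeoutMs)),
  ]);
}
```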

trentm commented 2 years ago

> A downside of completely disabling the backoff is that if all attempts to connect to the extension fail (for example because the extension layer is not installed), each individual event will cause the agent to attempt creating a new connection.

Fair. I haven't tried this with the Node.js agent, but I think it'll behave similarly (it has a 20 ms bufferWindowTime that attempts to batch events that arrive close together).
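For illustration, a rough sketch of that kind of buffer-window batching (a hypothetical shape, not the apm-nodejs-http-client implementation): events arriving within the window are sent in a single intake request, and a single connection attempt, instead of one per event.

```ts
const BUFFER_WINDOW_MS = 20; // matches the 20 ms bufferWindowTime mentioned above

function makeBatcher(send: (events: unknown[]) => void) {
  let buffer: unknown[] = [];
  let timer: ReturnType<typeof setTimeout> | null = null;

  return function enqueue(event: unknown): void {
    buffer.push(event);
    if (timer === null) {
      // Start the window on the first event; everything enqueued before the
      // timer fires goes out in the same batch.
      timer = setTimeout(() => {
        const batch = buffer;
        buffer = [];
        timer = null;
        send(batch);
      }, BUFFER_WINDOW_MS);
    }
  };
}
```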

> this keeps the APM Server Sender logic within agents simple

It is true that the APM server client for the Node.js agent has a number of subtle tweaks to deal with the differences of the Lambda runtime; i.e., this logic is currently not simple in the Node.js agent. (I don't know whether that complexity is somewhat justified by the indirect use of the "beforeExit" event, which effectively watches for an empty event loop to guess when the function invocation is "done".)

trentm commented 2 years ago

From the discussion above, and a little bit of chat on the apm-agent-nodejs call, I'll take another look at the Node.js APM agent to see whether it can reasonably do back-off without impacting the user's Lambda function responses.

Likely the end result is that I'll retract this PR -- Felix gave the argument for why back-off in the agent is still worthwhile -- and either the Node.js agent will figure out a way to safely do back-off in a Lambda environment, or it'll go off spec for this case.

felixbarny commented 2 years ago

@basepi Is the Python agent doing an exponential backoff when the Lambda extension returns errors/is not available? If yes, how is the Python agent handling flush requests while it's in a backoff/grace period? Does it short-circuit the flush while in backoff?

basepi commented 2 years ago

If we're backing off, the Python agent will drop the data on an explicit flush and return immediately.

So we won't hang, we'll just drop the data and end.

Our backoff interval is not configurable.
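For comparison with the flush sketch above, the behavior described here would look roughly like this (sketched in TypeScript for consistency with the other examples; the Python agent's actual implementation differs, and the names are illustrative): an explicit flush during back-off drops the buffered data and returns immediately rather than waiting.

```ts
// Illustrative only: drop buffered events and return immediately when flushed
// during a back-off period, so the invocation is never blocked.
function flushOrDrop(queue: unknown[], inBackoff: boolean, send: (events: unknown[]) => void): void {
  if (inBackoff) {
    queue.length = 0; // drop the data
    return;           // and end right away
  }
  send(queue.splice(0, queue.length));
}
```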

estolfo commented 2 years ago

@trentm, @felixbarny, @basepi is there a conclusion to this? The backoff implementation has been merged into the Lambda extension, and it was mentioned in this discussion that this PR might be retracted, as backoff in the agent is still useful.

Do we want to define an explicit behavior for when the agent is flushed and it's in a backoff/grace period?

felixbarny commented 2 years ago

I've created a proposal for specifying the behavior of flush during backoff: https://github.com/elastic/apm/pull/623

trentm commented 2 years ago

> is there a conclusion to this?

My status was still what https://github.com/elastic/apm/pull/613#issuecomment-1064280919 says: "Likely the end result is that I'll retract this PR". But otherwise this had moved to a low priority for me. I haven't read Felix's new #623 yet.

felixbarny commented 2 years ago

I don't think the priority has changed. I've just created #623 as an alternative to this spec that we can discuss at a later point in time.

trentm commented 2 years ago

Closing in favour of #623.