cloudfoundry / diego-release

BOSH Release for Diego
Apache License 2.0

[BBS] Add request metrics for BBS endpoints #898

Open klapkov opened 8 months ago

klapkov commented 8 months ago

Add request metrics for BBS endpoints

Summary

Currently the BBS does not emit much information about the performance of its endpoints. Today we only emit RequestsCount and RequestLatency (in regard to BBS endpoints), and both aggregate over all endpoints. That is why we propose a more detailed look under the hood of the BBS server. We can achieve this with the help of a module already used in the rep and in locket: https://github.com/cloudfoundry/locket/blob/main/metrics/helpers/request_metrics.go

With this helper we get much more information on performance per endpoint. It can be implemented at the handler level and emit new metrics once per minute (the default report interval). It gives us these metrics:

- RequestsStarted
- RequestsSucceeded
- RequestsFailed
- RequestsCancelled
- RequestsInFlight
- RequestLatencyMax

Now the tricky question is which endpoints should implement it. Here are most of the BBS endpoints:

//desiredLRP endpoints
"DesiredLRPSchedulingInfos", "DesiredLRPRoutingInfos", "DesiredLRPByProcessGuid", "DesiredLRPs",

//desiredLRP lifecycle endpoints
"UpdateDesireLRP", "RemoveDesiredLRP", "DesireDesiredLRP",

//actualLRP endpoints
"ActualLRPs", 

// actualLRP lifecycle endpoints
"ClaimActualLRP", "StartActualLRP", "CrashActualLRP", "FailActualLRP", "RemoveActualLRP", "RetireActualLRP",

// evacuation endpoints
"RemoveEvacuatingActualLRP", "EvacuateClaimedActualLRP", "EvacuateCrashedActualLRP", "EvacuateStoppedActualLRP", "EvacuateRunningActualLRP",

// task endpoints
"Tasks", "TaskByGuid", "DesireTask", "StartTask", "CancelTask", "RejectTask", "CompleteTask", "ResolvingTask", "DeleteTask",

Let's say we implement the helper for every one of these endpoints, which would give us full visibility into all operations of the BBS server. We have 28 endpoints here; multiplied by 6 metrics each, that is 168 new metrics. That is a lot.

If we do not want to introduce this many new metrics, we can divide the endpoints into groups, for example one group per category above:

- DesiredLRP
- DesiredLRP lifecycle
- ActualLRP
- ActualLRP lifecycle
- Evacuation
- Task

In this case we have 6 groups × 6 metrics = 36 new metrics. With this approach we do not get quite as much information, but at least we know how a certain operation group performs. The above groups are only an example; if we go down this path, we should decide how to split them.
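The grouping idea can be made concrete with a route-to-group map. The sketch below is illustrative only: the group names mirror the comment headings in the endpoint list above, and only one representative route per group is shown.

```go
package main

import "fmt"

// groupFor maps each BBS route to a coarser metric group. This is a
// hypothetical, partial map for illustration; a real one would cover all
// 28 routes, collapsing them into 6 groups.
var groupFor = map[string]string{
	"DesiredLRPs":              "DesiredLRP",
	"DesireDesiredLRP":         "DesiredLRPLifecycle",
	"ActualLRPs":               "ActualLRP",
	"ClaimActualLRP":           "ActualLRPLifecycle",
	"EvacuateRunningActualLRP": "Evacuation",
	"Tasks":                    "Task",
}

func main() {
	const metricsPerTarget = 6 // started, succeeded, failed, cancelled, in-flight, max latency

	// Count the distinct groups; each group gets its own counter set.
	groups := map[string]bool{}
	for _, g := range groupFor {
		groups[g] = true
	}
	fmt.Println(len(groups) * metricsPerTarget) // prints: 36
}
```

With the full 28-route map the same calculation yields 28 × 6 = 168 when instrumenting per endpoint, versus 6 × 6 = 36 when instrumenting per group.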

Maybe we could even make the set of endpoints that implement the helper configurable, so that everyone can use what best suits them. In any case, I think this topic is worth a discussion. I will come back with a rough PoC in the next few days.
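The configurability idea could look something like the following sketch. Everything here is an assumption: the property name and the `shouldInstrument` helper are hypothetical, standing in for an operator-facing allow-list (e.g. a BOSH property) that decides which routes get wrapped with the metrics helper.

```go
package main

import "fmt"

// shouldInstrument reports whether a given BBS route should be wrapped with
// the request-metrics helper, based on a hypothetical operator-configured
// allow-list. An empty list disables the new metrics entirely; the special
// value "all" enables them for every route.
func shouldInstrument(route string, configured []string) bool {
	for _, r := range configured {
		if r == route || r == "all" {
			return true
		}
	}
	return false
}

func main() {
	// e.g. an operator who only cares about the hottest ActualLRP paths:
	cfg := []string{"ActualLRPs", "StartActualLRP"}

	fmt.Println(shouldInstrument("ActualLRPs", cfg))      // prints: true
	fmt.Println(shouldInstrument("RemoveDesiredLRP", cfg)) // prints: false
}
```

At handler-registration time the BBS could consult such a check once per route and only wrap the allowed ones, so unconfigured deployments pay no extra metric cost.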

Diego repo

https://github.com/cloudfoundry/bbs

geofffranks commented 6 months ago

I'm in favor of adding better metrics for more visibility, but I also want to keep the new metric count low. Would your goals be met by keeping the current two metrics but breaking them out per group of endpoints? I'm wondering whether having started vs. succeeded/failed/in-flight/cancelled request counts is worth the additional metrics over just a raw request count.

For grouping - maybe it makes sense to do a small-scale profile with metrics on each endpoint, and then think about grouping the less-frequent/less-impactful calls while keeping the most-used endpoints, and those most susceptible to performance issues, isolated.

@klapkov can you add an item to the next WG meeting's agenda to get broader opinions on this?

MarcPaquette commented 5 months ago

Hi @geofffranks @klapkov, did the discussion of this issue happen during the WG meeting? What's the status of this PR (and of https://github.com/cloudfoundry/bbs/pull/80)?

klapkov commented 5 months ago

Hello @MarcPaquette,

I was not able to join last month's WG meeting, but we will include the topic in the agenda for the upcoming one next week. We will discuss the topic there and hopefully get some input from everyone.

geofffranks commented 3 months ago

@klapkov did this ever get discussed in the WG?

klapkov commented 3 months ago

@geofffranks Sadly no. This is an important topic for us, but priorities shifted within the team, and it seems that for now we won't invest time in it. We will certainly return to it at some point, though.