2i2c-org / infrastructure

Infrastructure for configuring and deploying our community JupyterHubs.
https://infrastructure.2i2c.org
BSD 3-Clause "New" or "Revised" License

[Spike] [max 6h] Decide on cost allocation strategy - Athena vs. Cost Explorer API #4648

Closed. consideRatio closed this issue 3 weeks ago.

consideRatio commented 4 weeks ago

This task blocks the tasks towards attributing costs using Athena, because Yuvi learned about another approach that should be evaluated first. This is described in https://github.com/2i2c-org/infrastructure/issues/4453#issuecomment-2301947867:

Regardless, I think it's early enough that we should investigate this alternative to Athena.

It would involve:

  1. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-api.html as the source of data.
  2. An intermediate Python web server that talks to the Cost Explorer API (a minimal sketch follows this list).
  3. https://grafana.com/grafana/plugins/yesoreyeram-infinity-datasource/ for connecting to this from Grafana. Grafana recommends it as the replacement for https://github.com/grafana/grafana-json-datasource.
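
As a concrete illustration of item 2, here is a minimal sketch of what that intermediate Python web server could look like, assuming boto3 and FastAPI. The endpoint path, parameters, and grouping are illustrative assumptions, not the actual implementation:

```python
# Minimal sketch of an intermediate web server that exposes Cost Explorer
# data as JSON for Grafana's Infinity datasource. Illustrative only.
import boto3
from fastapi import FastAPI

app = FastAPI()
ce = boto3.client("ce")  # Cost Explorer's API endpoint is global

@app.get("/costs/daily")
def daily_costs(start: str, end: str):
    """Return daily cost per AWS service between start and end (YYYY-MM-DD)."""
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    return response["ResultsByTime"]
```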

There are a few major advantages over using Athena:

  1. Much easier to validate: instead of writing complex SQL queries, we translate what we can already do visually in the Cost Explorer into API calls.
  2. Athena works at the AWS organization level rather than per AWS account, so we would have needed an intermediate layer anyway whenever we use the 2i2c AWS organization. We wouldn't have needed this for Openscapes, but using it for any of our other AWS accounts would have required an intermediate Python layer for access control (so different communities can't see each other's data); a minimal sketch of such access control follows this list.
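
To illustrate advantage 2, a hypothetical sketch of per-community access control in the Python layer: the server assumes a read-only role in each community's AWS account, so one community's dashboard can never query another's data. This assumes boto3; the role ARNs and account IDs are invented:

```python
# Hypothetical per-community access control in the intermediate Python layer.
# Each community maps to a scoped role in its own AWS account (ARNs invented).
import boto3

COMMUNITY_ROLE_ARNS = {
    "openscapes": "arn:aws:iam::111111111111:role/cost-explorer-readonly",
    "other-community": "arn:aws:iam::222222222222:role/cost-explorer-readonly",
}

def ce_client_for(community: str):
    """Return a Cost Explorer client using credentials scoped to one account."""
    creds = boto3.client("sts").assume_role(
        RoleArn=COMMUNITY_ROLE_ARNS[community],
        RoleSessionName=f"cost-dashboard-{community}",
    )["Credentials"]
    return boto3.client(
        "ce",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```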

So if possible, we should prefer this method.

We can reuse all the work we have done so far, except for some parts of #4546.

Next step here is to design a spike to validate this (instead of #4544). The Athena-specific issues that are subtasks of this one can be closed if we take this approach.

Practical spike steps

I think this has to be updated continuously as part of the spike, but the goal is to clarify and verify that it's reasonable to move towards using the Cost Explorer API.

Definition of done

Potential followup work not part of spike

yuvipanda commented 4 weeks ago

The definition of done looks good to me, @consideRatio.

If we go for the Cost Explorer API, work to define/refine further tasks is needed.

If this isn't part of the spike, can you create another issue to track it once the spike is done? Thanks.

consideRatio commented 4 weeks ago

Picking it up now with some initial reading at the end of my day, to be continued tomorrow.

consideRatio commented 3 weeks ago

Notes to sketch a future possible implementation

consideRatio commented 3 weeks ago

Conclusion - moving forward with Cost Explorer API

I've arrived at what I consider sufficient grounds for a decision to move ahead with the Cost Explorer API.

It seems technically very viable, and Yuvi's motivation for using the Cost Explorer API over Athena is sufficient in my mind.

There are a few major advantages over using Athena:

  1. Much easier to validate: instead of writing complex SQL queries, we translate what we can already do visually in the Cost Explorer into API calls.
  2. Athena works at the AWS organization level rather than per AWS account, so we would have needed an intermediate layer anyway whenever we use the 2i2c AWS organization. We wouldn't have needed this for Openscapes, but using it for any of our other AWS accounts would have required an intermediate Python layer for access control (so different communities can't see each other's data).

Another positive conclusion is that we can avoid putting much complexity in the Python intermediary, and can put that complexity in the Grafana queries instead. This is because the Infinity plugin seems to allow for notable post-processing of the JSON responses. Due to this, we can probably iterate on the cost dashboards more quickly and responsively, while keeping the Python intermediary a slim project with relatively low complexity, which also makes it more viable for re-use by others. A sketch of this split appears below.
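
To make that division of labor concrete, here is a hedged sketch of the kind of slim transformation the Python intermediary could do: flatten Cost Explorer's nested ResultsByTime response into flat JSON rows, which the Infinity datasource can consume and further filter or transform on the Grafana side. The field names are illustrative assumptions:

```python
# Illustrative sketch: flatten Cost Explorer's nested ResultsByTime structure
# into flat rows that a JSON-consuming Grafana datasource can handle directly.
def flatten(results_by_time: list[dict]) -> list[dict]:
    rows = []
    for period in results_by_time:
        for group in period.get("Groups", []):
            rows.append({
                "date": period["TimePeriod"]["Start"],
                "name": group["Keys"][0],
                "cost": float(group["Metrics"]["UnblendedCost"]["Amount"]),
            })
    return rows
```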

yuvipanda commented 3 weeks ago

Another positive conclusion is that we can avoid putting much complexity in the Python intermediary, and can put that complexity in the Grafana queries instead.

Given that we'll be working on https://2i2c.productboard.com/roadmap/7803626-product-delivery-flow/features/27195081 in the future, as well as possibly needing to extend this work to GCP, and given the recommendations in https://docs.aws.amazon.com/cost-management/latest/userguide/ce-api-best-practices.html#ce-api-best-practices-optimize-costs, I'd like most of the complexity to actually be in the Python layer, not in the Grafana layer. Fixing issues in Python code is also far more accessible to team members and other open source contributors than fixing them in jsonnet plus the filtering languages the Grafana plugin uses. So let's use the Grafana plugin primarily as a visual display layer, and keep most of the complexity in the Python code.
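
As an illustration of complexity that belongs in the Python layer, here is a hedged sketch of response caching, in the spirit of the linked best-practices page (Cost Explorer requests are billed per call, so repeated Grafana refreshes shouldn't each hit the API). The TTL and cache structure are illustrative assumptions:

```python
# Illustrative sketch of caching Cost Explorer responses in the Python layer,
# so repeated dashboard refreshes reuse data instead of re-querying the API.
import time

_cache: dict[tuple, tuple[float, object]] = {}
CACHE_TTL = 3600  # seconds; cost data only updates a few times per day

def cached_query(ce, **params):
    """Run get_cost_and_usage, reusing a recent result for identical params."""
    key = tuple(sorted((k, repr(v)) for k, v in params.items()))
    now = time.time()
    if key in _cache and now - _cache[key][0] < CACHE_TTL:
        return _cache[key][1]
    result = ce.get_cost_and_usage(**params)["ResultsByTime"]
    _cache[key] = (now, result)
    return result
```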