maestro-server / discovery-api

:flashlight: Service discovery API
https://maestro-server.github.io/discovery-api/
GNU General Public License v3.0

Request to import cost information #3

Open rayba opened 4 years ago

rayba commented 4 years ago

I've been evaluating multi-cloud management tools and this is pretty close to what is needed. I'd like to see you import cost reports from each cloud and correlate them with the information you already import. With some guidance on the best way to incorporate this, maybe I or another developer interested in this project could contribute it as a PR. Thanks!

Signorini commented 4 years ago

@rayba Thanks for reaching out.

In my mind, cost tracking can be done in two ways.

1 - The first one uses a simple crawler that adds the cost data to the DB; the system then reads it and does the math.

BTW: This was my first approach. I already added the AWS and Azure costs to the instance flavors, as you can check here:

         "Linux On Demand cost": "$4.256000 hourly",
     "Linux Reserved cost": "$2.688000 hourly",
     "RHEL On Demand cost": "$4.386 hourly",
     "RHEL Reserved cost": "$2.818 hourly",
     "SLES On Demand cost": "$4.356 hourly",
     "SLES Reserved cost": "$2.721 hourly",
     "Windows On Demand cost": "$7.200000 hourly",
     "Windows Reserved cost": "$5.632000 hourly",
     "Windows SQL Web On Demand cost": "$8.327 hourly",
     "Windows SQL Web Reserved cost": "$6.759 hourly",
     "Windows SQL Std On Demand cost": "$14.88 hourly",
     "Windows SQL Std Reserved cost": "$13.312 hourly",
     "Windows SQL Ent On Demand cost": "$31.2 hourly",
     "Windows SQL Ent Reserved cost": "$29.632 hourly",
     "Linux SQL Web On Demand cost": "$5.3376 hourly",
     "Linux SQL Web Reserved cost": "$3.7696 hourly",
     "Linux SQL Std On Demand cost": "$11.936 hourly",
     "Linux SQL Std Reserved cost": "$10.368 hourly",
     "Linux SQL Ent On Demand cost": "$28.256 hourly",
     "Linux SQL Ent Reserved cost": "$26.688 hourly",

https://github.com/maestro-server/server-app/blob/master/migrations/0055-aws_flavor.js
https://github.com/maestro-server/server-app/blob/master/migrations/0085-azure_flavor.js

These are migrated when you start the server API service.
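To illustrate the math the first approach implies, the hourly cost strings in the flavor data above could be parsed into numbers before doing any aggregation. A minimal sketch; the function names and the 730-hours-per-month assumption are mine, not part of the codebase:

```python
import re

def parse_hourly_cost(cost_str):
    """Parse a flavor cost string like '$4.256000 hourly' into a float."""
    match = re.match(r"\$([\d.]+)\s+hourly", cost_str)
    if match is None:
        raise ValueError("unrecognized cost format: %r" % cost_str)
    return float(match.group(1))

def monthly_cost(cost_str, hours=730):
    """Estimate a monthly cost, assuming ~730 hours of uptime per month."""
    return parse_hourly_cost(cost_str) * hours
```
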

The main con of this method is that it is hard to track so many variants: some hosts run 24/7, others don't; vendors have many pricing models, like on-demand, spot, or dedicated instances; running time and regions differ. Many, many variables.

2 - The second approach uses the Cost Explorer API and pulls that information into the system. It is a better, more robust solution: we can track costs by hours consumed, use tags to allocate them to applications or systems, and we don't need to do so much math or control so many variants.
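A rough sketch of what the Cost Explorer pull could look like with boto3 (which is assumed installed and configured with AWS credentials; the function names and the record shape are illustrative, not the project's actual schema):

```python
def fetch_daily_costs(start, end):
    """Query the AWS Cost Explorer API for daily unblended cost per service."""
    import boto3  # lazy import; requires AWS credentials to be configured
    ce = boto3.client("ce")
    return ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

def normalize_response(response):
    """Flatten a Cost Explorer response into records for a MongoDB catalogue."""
    records = []
    for day in response["ResultsByTime"]:
        for group in day["Groups"]:
            records.append({
                "date": day["TimePeriod"]["Start"],
                "service": group["Keys"][0],
                "cost": float(group["Metrics"]["UnblendedCost"]["Amount"]),
            })
    return records
```

Grouping by a tag instead of `SERVICE` (`{"Type": "TAG", "Key": "..."}` in `GroupBy`) would give the per-application allocation mentioned above.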

rayba commented 4 years ago

The first is useful for sure but the second is what I would require to get adoption in a large enterprise setting. I'll take a crack at it.

rayba commented 4 years ago

Looking at the requirements for importing cost (from AWS), I think the best approach is similar to how existing products do it (example: https://docs.flexera.com/Optima/Content/helplibrary/costusagerepaws.htm#optimacloudproviderbilling_1180771343_1149281) - this involves enabling the AWS billing report with resource IDs. That report saves to S3 as a zip file with an enclosed CSV file. The CSV has various fields, including "ResourceId".
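A stdlib-only sketch of reading such a zipped report and summing cost per resource. The column names follow the Cost & Usage Report convention (`lineItem/ResourceId`, `lineItem/UnblendedCost`) and would need to be checked against the actual report:

```python
import csv
import io
import zipfile
from collections import defaultdict

def cost_by_resource(zip_bytes):
    """Sum unblended cost per ResourceId from a zipped billing-report CSV."""
    totals = defaultdict(float)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            with archive.open(name) as raw:
                reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8"))
                for row in reader:
                    rid = row.get("lineItem/ResourceId")
                    if rid:  # line items without a resource id are skipped here
                        totals[rid] += float(row["lineItem/UnblendedCost"] or 0)
    return dict(totals)
```
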

I think we could create a new MongoDB catalog for cost, and where a resource lines up with an existing specifically managed object (e.g. an EC2 instance/server) we show cost information; otherwise, maybe a cost/other tab in the UI? The benefit of importing cost this way is that we also get all the resources which are not specifically imported, which would augment the information we have about specifically imported objects. Analytics would become a lot more interesting as well.
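The matching step described above can be sketched as a pure function that splits cost records into those tied to a managed object and the rest (destined for the cost/other tab). The `unique_id` and `resource_id` field names are my assumption, not the project's actual schema:

```python
def correlate_costs(cost_records, servers):
    """Split cost records into those matching a managed server and the rest.

    `servers` is assumed to be a list of catalogue documents carrying the
    provider resource id under a 'unique_id' field (field name hypothetical).
    """
    known = {s["unique_id"]: s for s in servers if s.get("unique_id")}
    matched, unmatched = [], []
    for rec in cost_records:
        server = known.get(rec["resource_id"])
        if server is not None:
            matched.append(dict(rec, server_id=server["_id"]))
        else:
            unmatched.append(rec)  # would surface in a cost/other tab
    return matched, unmatched
```
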

As this does not fit into the typical API based discovery under discovery/translate (at least for AWS) I could use some guidance on where to put this feature. Thanks!

Signorini commented 4 years ago

humm,

Personally, I think it is hard to handle a CSV file; consistency and synchronization can be harsh to achieve. My first choice would be to use an API call to fill the new MongoDB catalogue; this approach fits well into the discovery/translate system. The translate system ensures a consistent data structure, which means we can plug in other providers such as IBM or Azure with a single data structure.

If you go with a file approach, or an easier way to do the task, my recommendation is: make a new service and expose it through an API. This service will handle all your cost rules; then plug this new service into server-maestro. It will provide all the authentication and REST/HATEOAS machinery, and you don't need to create cron/retry tasks because scheduler-maestro can handle that.

In general terms, all back-end services such as discovery, reports, and analytics have an API exposed using Flask and workers using Celery/RabbitMQ.
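A minimal sketch of what such a new cost service's API surface could look like in Flask, using the shared field names. The route, port, and the stubbed data are all illustrative; a real service would query the cost catalogue instead:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stub data so the endpoint shape is visible; a real service would
# query the cost catalogue in MongoDB here.
FAKE_COSTS = [
    {"_id": "1", "owner_id": "abc", "resource_id": "i-abc", "cost": 0.75},
]

@app.route("/costs", methods=["GET"])
def list_costs():
    """Expose cost records using the shared variable names (_id, owner_id)."""
    return jsonify({"items": FAKE_COSTS, "found": len(FAKE_COSTS)})

if __name__ == "__main__":
    app.run(port=5005)
```
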

You don't need to use Python, but the standard is to expose internal services using REST, have API docs, and use the same variable names such as _id, owner_id, etc. E.g.: http://docs.maestroserver.io/en/latest/developer/api/index.html

Regarding the front-end, yes, it makes sense to create a new tab in the UI.