FinOps-Open-Cost-and-Usage-Spec / FOCUS_Spec

The Unifying Specification for Cloud Billing Data
https://focus.finops.org

[FEEDBACK]: Do we have standard dataset format to be used whatever the CSP and load it in my single PowerBI reporting ? #325

Open PE-Nuiry opened 7 months ago

PE-Nuiry commented 7 months ago

Proposed Change

Having standard datasets is a must to minimize the import effort, whatever the CSP.

What FinOps use cases cannot be performed without the proposed change?

Having a standard dataset, such as a Detailed Invoice, with the predefined sequence of data elements that you delivered in v1.0 will enable a standard import process for the datasets collected from Azure, AWS, Google, etc. It could be interesting to have standard datasets for other cases too.
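The fixed-sequence idea can be sketched in Python with pandas: each provider's export is projected onto one agreed column order before being appended to a single table. This is a minimal illustration, not part of the spec; the column subset (drawn from FOCUS 1.0 names) and the fake Azure/AWS rows are assumptions.

```python
import pandas as pd

# Agreed column sequence (a subset of FOCUS 1.0 columns, for illustration).
FOCUS_COLUMNS = [
    "BillingPeriodStart",
    "BillingPeriodEnd",
    "ProviderName",
    "ServiceName",
    "BilledCost",
    "BillingCurrency",
]

def normalize(export: pd.DataFrame) -> pd.DataFrame:
    """Project a provider export onto the agreed FOCUS column order."""
    missing = [c for c in FOCUS_COLUMNS if c not in export.columns]
    if missing:
        raise ValueError(f"Export is missing FOCUS columns: {missing}")
    return export[FOCUS_COLUMNS]  # one fixed sequence, whatever the CSP

# Fake exports standing in for Azure and AWS billing files:
azure = pd.DataFrame({
    "BillingPeriodStart": ["2024-02-01"], "BillingPeriodEnd": ["2024-03-01"],
    "ProviderName": ["Microsoft"], "ServiceName": ["Virtual Machines"],
    "BilledCost": [120.5], "BillingCurrency": ["EUR"],
    "ExtraAzureColumn": ["x"],  # provider-specific extras get dropped
})
aws = pd.DataFrame({
    "ServiceName": ["Amazon EC2"], "BilledCost": [88.0],
    "BillingCurrency": ["EUR"], "ProviderName": ["AWS"],
    "BillingPeriodStart": ["2024-02-01"], "BillingPeriodEnd": ["2024-03-01"],
})

# One import routine, one column order, whatever the CSP:
combined = pd.concat([normalize(azure), normalize(aws)], ignore_index=True)
```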

Which FinOps personas perform this use case?

A FinOps practitioner who provides cost visibility within the company via a reporting tool such as Power BI.

For which providers do you perform this use case?

Azure and AWS nowadays.

Criticality Scale

2: Important for adoption in the next 3-6 months

Context / Supporting information

Let's compare FinOps standardization with EDI or RosettaNet. EDIFACT provides standard transactions (ORDERS, INVOIC, etc.), and RosettaNet provides PIPs (3A4, 3A5, 3C1). All of these are standardized transactions with a predefined sequence of data elements, and those standards enable my company to send or receive those transactions the same way, whatever customer or vendor we are dealing with. Why not do the same thing with FinOps? Otherwise, the deployment of FOCUS will probably be very limited in my company.

thecloudman commented 7 months ago

@PE-Nuiry I'm testing this with my data from Google and Azure over the next few weeks to do exactly what you are asking. Keen to know if you have tried using the BigQuery view from Google for the FOCUS columns, exporting from that view to CSV, and then using the FinOps hub that @flanakin created to import that CSV into your ingestion container? That is what I am going to try. My other path is to export from BQ to Parquet, move the Parquet files to the ingestion container directly, then use Power BI and modify the Power BI queries slightly to import the additional data from the new directory in ingestion. My view is that, as the columns should all be the same, bringing data from different providers into Power BI in one dataset should be possible, and you could then use PublisherName to distinguish between them.
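The single-dataset approach described above can be sketched with pandas, under some assumptions: a shared ingestion folder into which each provider drops a FOCUS-shaped export (CSV here; a Parquet export would work the same way via `pd.read_parquet`), everything read into one DataFrame, and a provider column (`ProviderName` in this sketch) distinguishing the sources. File names and sample rows are made up for illustration.

```python
import tempfile
from pathlib import Path

import pandas as pd

# Hypothetical local folder standing in for the hub's ingestion container.
ingestion = Path(tempfile.mkdtemp())

# Fake FOCUS exports: one from a BigQuery FOCUS view, one from Azure.
pd.DataFrame({
    "ProviderName": ["Google Cloud", "Google Cloud"],
    "ServiceName": ["BigQuery", "Compute Engine"],
    "BilledCost": [10.0, 40.0],
}).to_csv(ingestion / "gcp_focus.csv", index=False)

pd.DataFrame({
    "ProviderName": ["Microsoft"],
    "ServiceName": ["Virtual Machines"],
    "BilledCost": [25.0],
}).to_csv(ingestion / "azure_focus.csv", index=False)

# One dataset from all providers; a Parquet path would just add
# pd.read_parquet over *.parquet files here.
combined = pd.concat(
    [pd.read_csv(p) for p in sorted(ingestion.glob("*.csv"))],
    ignore_index=True,
)

# Providers stay distinguishable inside the single dataset:
cost_by_provider = combined.groupby("ProviderName")["BilledCost"].sum()
```

The same shape carries over to Power BI: because every file shares the FOCUS columns, the report model needs one table, not one per CSP.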

PE-Nuiry commented 7 months ago

Hi Graham,

I will probably do a POC in 2024, but there is no plan yet. I do not use Google, only Azure mainly and a bit of AWS, so I would be pleased to hear about your return of experience.

I was the EDI and RosettaNet expert in my company for 15 years, so I'm convinced that FinOps should provide, through its FOCUS program, some standard datasets; otherwise, adoption could be low. Companies like mine develop less and less, and we will not acquire a FinOps package due to the cost and the fact that we already have a very good Power BI report that provides visibility on costs and on what to improve. So we are more than ready for a standard for collecting costs from our CSPs, and we will then have a single Power BI report for multiple IaaS costs. Best regards, Pierre-Emmanuel Nuiry
