da-ekchajzer opened this issue 1 year ago
Adding a note: I previously asked CCF to update their replication factors here for AWS S3 Standard (changing the replication factor from 3 to 6).
I can think of two potential difficulties with this.
Taking Azure as an example, block storage can be Standard, Premium or Ultra, offering more IOPS as you choose a higher performance class, so the infrastructure behind it is probably more performance-driven.
This doesn't necessarily mean more infrastructure, but it most probably means higher-end disks.
S3-like storage implies at least (based on what you can find in an open-source S3-like implementation such as Ceph object storage / RADOS Gateway):
Block storage, on the other hand, might be a bit simpler, containing at least:
File storage might sit in the middle, needing fewer components than object storage but still being more complex than block storage.
I imagine part of the modeling should somehow account for this difference in complexity.
I'm not sure how those two items should be implemented, but I think this is a modeling discussion worth having.
Extra note for the hackathon on May 24th:
Could we, and if so how could we, account for the impact not only of storing data on a given storage service, but also of reading and writing that data?
I can hardly imagine using anything finer than a per-GB impact factor for network traffic, probably different for each type of service, and perhaps a very rough estimate for the compute involved in delivering the data (which may matter more for object storage than for other services).
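A minimal sketch of what such per-GB accounting could look like. All factor values and names below are hypothetical placeholders for the sake of illustration, not measured coefficients:

```python
# Hypothetical per-GB network impact factors (kgCO2e/GB) per service type.
# These numbers are illustrative placeholders, not real coefficients.
NETWORK_FACTOR_KG_PER_GB = {
    "object": 0.01,
    "block": 0.005,
    "file": 0.007,
}

def access_impact_kg(service_type: str, gb_read: float, gb_written: float) -> float:
    """Estimate the network-side impact of reading/writing data with a flat per-GB factor."""
    factor = NETWORK_FACTOR_KG_PER_GB[service_type]
    return (gb_read + gb_written) * factor
```

The compute share of delivering the data would then be a separate, even rougher term added on top, as discussed above.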
Some notes from hackathon:
CCF coefficients: https://docs.google.com/spreadsheets/d/1D7mIGKkdO1djPoMVmlXRmzA7_4tTiGZLYdVbfe85xQM/edit#gid=735227650 /!\ these may differ from the coefficients actually implemented: https://github.com/cloud-carbon-footprint/cloud-carbon-footprint/blob/ee6f0e71d80e22fef9ed76846979bf4123562da2/packages/aws/src/domain/AwsFootprintEstimationConstants.ts#L130
Content of the framaform hackathonOrange2024:
Object storage impacts
Formula
Impact_use = FE_elec × PUE × conso_storage(storage_class) × Storage_user × resiliency_cost(storage_class) × filling_rate + Compute

Where:

conso_storage(storage_class): electricity consumption of the storage, depending on the storage class

resiliency_cost(storage_class) = (physical bytes needed) / (resilient user bytes) = resilience_region × local_redundancy

Impact_fab = Fab(temp) + Compute

Fab(temp) = FE_fab_fixed + FE_fab_variable × density × reservation × resiliency_cost × filling_rate

filling_rate =

FE_fab_fixed: e.g. the cost of the casing
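The use-phase part of the hackathon formula can be sketched in Python as follows. Parameter names, units, and the example values are assumptions made for illustration, not agreed-upon coefficients:

```python
def impact_use_kg(
    fe_elec: float,           # electricity emission factor (kgCO2e/kWh)
    pue: float,               # power usage effectiveness of the datacenter
    conso_kwh_per_gb: float,  # electricity consumption of the storage class (kWh per GB stored)
    storage_user_gb: float,   # amount of data stored by the user (GB)
    resilience_region: float, # number of regions the data is replicated to
    local_redundancy: float,  # local replication factor within a region
    filling_rate: float,      # filling-rate term of the formula
    compute: float = 0.0,     # extra compute impact for serving the data (kgCO2e)
) -> float:
    """Use-phase impact following the hackathon formula (illustrative sketch)."""
    resiliency_cost = resilience_region * local_redundancy
    return (fe_elec * pue * conso_kwh_per_gb * storage_user_gb
            * resiliency_cost * filling_rate + compute)
```

The manufacturing-phase formula would follow the same shape, with the fixed and variable FE_fab terms replacing the electricity terms.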
S3: the replication level changes depending on the storage class, see the "performance chart" section of https://aws.amazon.com/s3/storage-classes/ (multi-AZ vs single-AZ)
Azure object storage:
the replication level seems independent of the storage class
Example: One Zone (a somewhat special case of S3)
Other sources of information: YouTube - FAST '23 - Building and Operating a Pretty Big Storage System (My Adventures in Amazon S3) https://www.youtube.com/watch?v=sc3J4McebHE
Azure naming for redundancy classes (https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy):
LRS (locally redundant storage): minimal redundancy (equivalent to 2 servers within a single zone/DC)
ZRS (zone-redundant storage): multi-zone (independent DCs) within a single region
GZRS (geo-zone-redundant storage): same as the previous, but multi-region
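Those redundancy classes could feed directly into the resiliency_cost term of the formula. The copy counts below are loose guesses derived from the notes in this thread (e.g. LRS as roughly 2 servers in one zone), not figures from the Microsoft documentation:

```python
# Azure redundancy classes mapped to (regions, local copies).
# The copy counts are illustrative guesses based on the notes above,
# not Microsoft's documented figures.
AZURE_REDUNDANCY = {
    "LRS":  {"resilience_region": 1, "local_redundancy": 2},
    "ZRS":  {"resilience_region": 1, "local_redundancy": 3},  # guess: one copy per zone
    "GZRS": {"resilience_region": 2, "local_redundancy": 3},  # guess: ZRS duplicated in a second region
}

def resiliency_cost(redundancy_class: str) -> float:
    """Physical bytes needed per resilient user byte, per the hackathon formula."""
    c = AZURE_REDUNDANCY[redundancy_class]
    return c["resilience_region"] * c["local_redundancy"]
```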
Visualizing the costs of S3 versioning
https://cloudiamo.com/2020/06/20/figuring-out-the-cost-of-versioning-on-amazon-s3/
Cost of object versions in S3
=> Treated as regular storage (i.e. the total billed size is the size of v0 + the size of v1)
- See "How am I charged for using Versioning?" in https://aws.amazon.com/s3/faqs/
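That billing rule (every retained version billed like a regular object) is trivial to model; a minimal sketch:

```python
def billed_size_gb(version_sizes_gb: list[float]) -> float:
    """S3 versioning: each stored version is billed as regular storage,
    so the billed size is simply the sum of all version sizes."""
    return sum(version_sizes_gb)
```

For impact modeling this suggests versioned objects need no special treatment: they just increase the stored volume.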
Use cases
S3 https://aws.amazon.com/s3/storage-classes/?nc1=h_ls
Block Storage
Updated diagram showing what we intend to exclude in v1:
Note: add a usage-time dimension to the formula
Problem
Most of the cloud instances we implement don't include storage, since cloud providers offer storage as a service that can be attached to an instance. We should add a way to compute the impacts of storage as a service from the impacts of storage components.
Solution
General case
Create a storage_as_a_service router.
We should compute the impacts of a classic disk from user-provided (or default) characteristics:
Then compute the impact per unit of storage from the value given by the user, to finally get the impact for the total amount of storage the user consumes.
impact = ((impact(disk) / disk.capacity) * usage.storage) * replication
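A minimal sketch of that formula, assuming a `Disk` with a total impact and a capacity (both names are hypothetical, not the project's actual API):

```python
from dataclasses import dataclass

@dataclass
class Disk:
    impact_kg: float    # total impact of one disk (kgCO2e)
    capacity_gb: float  # capacity of that disk (GB)

def storage_service_impact_kg(disk: Disk, usage_storage_gb: float, replication: float) -> float:
    """impact = ((impact(disk) / disk.capacity) * usage.storage) * replication"""
    return (disk.impact_kg / disk.capacity_gb) * usage_storage_gb * replication
```

Dividing the disk's impact by its capacity gives an impact per GB, which is then scaled by the user's storage volume and the service's replication factor.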
Archetypes
We could pre-record some values to pre-fill these fields with the characteristics of specific types of services from specific providers.
Most of these values will be hard to obtain for specific providers. Replication factors have been gathered for GCP, Azure and AWS here: https://docs.google.com/spreadsheets/d/1D7mIGKkdO1djPoMVmlXRmzA7_4tTiGZLYdVbfe85xQM/edit#gid=735227650