viniciusdc opened 1 year ago
OK, here are my findings so far, based on the following assumptions:
Profile | CPU Lim | MEM Lim (GB) | Instance type | GPU type | #GPU | MEM Price ($/GB/h) | vCPU Price ($/vCPU/h) | GPU Price ($/GPU/h) | Total ($/h)
---|---|---|---|---|---|---|---|---|---
Small | 2 | 8 | n1-standard-4 | - | 0 | $0.004237 | $0.031611 | - | $0.097118
Medium | 4 | 16 | n1-standard-4 | - | 0 | $0.004237 | $0.031611 | - | $0.194236
Large | 90 | 375 | n2-standard-96 | - | 0 | $0.004237 | $0.031611 | - | $4.433865
Single GPU | 8 | 30 | n1-standard-8 | nvidia-tesla-k80 | 1 | $0.004237 | $0.031611 | $0.450000 | $0.829998
Multi GPU (x4) | 8 | 30 | n1-standard-8 | nvidia-tesla-k80 | 4 | $0.004237 | $0.031611 | $0.450000 | $2.179998
Single A100 | 16 | 60 | a2-highgpu-1g | nvidia-ampere-100 | 1 | $0.004237 | $0.031611 | $2.934000 | $3.693996
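The "Total" column above is just vCPUs plus memory plus any attached GPUs at the listed hourly rates. A minimal sketch (`profile_hourly_cost` is an illustrative name, not an existing Nebari utility):

```python
# Sketch: per-hour cost of a profile from the table above.
# Rates are the GCP on-demand prices listed in the table.
VCPU_PRICE = 0.031611  # $/vCPU/hour
MEM_PRICE = 0.004237   # $/GB/hour

def profile_hourly_cost(vcpus, mem_gb, gpu_price=0.0, n_gpus=0):
    """Hourly cost = vCPUs + memory + any attached GPUs."""
    return vcpus * VCPU_PRICE + mem_gb * MEM_PRICE + n_gpus * gpu_price

small = profile_hourly_cost(2, 8)             # Small, approx. $0.097118/h
multi = profile_hourly_cost(8, 30, 0.45, 4)   # Multi GPU (x4), approx. $2.179998/h
```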
Node | Node cost ($/h) | Shared storage, NFS (GiB) | Price ($/GiB/h) | Shared storage cost ($/h)
---|---|---|---|---
General (8 vCPU / 32 GB - n1-standard-8) | $0.388472 | 500 (conda-store) | $0.000274 | $0.137000
Worker (4 vCPU / 16 GB - n1-standard-4) | $0.194236 | 200 (jupyter) | $0.000274 | $0.054800
*These storage costs are based on Filestore, the resource commonly used for NFS storage; I am using the Standard tier for the estimates.
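The hourly cost of each always-on node plus its share of the Filestore NFS volume can be sketched as follows (`node_hourly` is an illustrative helper; the rates are taken from the table above):

```python
# Sketch: hourly cost of the General/Worker nodes plus their NFS shares.
VCPU_PRICE = 0.031611  # $/vCPU/hour
MEM_PRICE = 0.004237   # $/GB/hour
NFS_PRICE = 0.000274   # $/GiB/hour (Filestore Standard, per the table)

def node_hourly(vcpus, mem_gb, nfs_gib=0):
    """Hourly cost of one node and its allocated NFS share."""
    return vcpus * VCPU_PRICE + mem_gb * MEM_PRICE + nfs_gib * NFS_PRICE

general = node_hourly(8, 32, 500)  # n1-standard-8 + 500 GiB conda-store share
worker = node_hourly(4, 16, 200)   # n1-standard-4 + 200 GiB jupyter share
```

`general` works out to about $0.388472 + $0.137000 = $0.525472/hour, i.e. the two General-row costs from the table combined.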
Other volumes | Allocation (GiB) | Monthly price (all standard)
---|---|---
data-keycloak | 8 | $0.32 |
data-nebari-conda-store-postgresql | 8 | $0.32 |
hub-db-dir | 1 | $0.04 |
nebari-conda-store-minio | 500 | $20.00 |
nebari-conda-store-storage | 500 | $20.00 |
nfs-server-nfs-storage | 200 | $8.00 |
redis-data | 8 | $0.32 |
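Summing the volume table is straightforward; a sketch, assuming the $0.04/GiB/month standard-disk rate that the rows above imply:

```python
# Sketch: total monthly cost of the persistent volumes listed above.
PD_STANDARD_MONTHLY = 0.04  # $/GiB/month, inferred from the table rows

volumes_gib = {
    "data-keycloak": 8,
    "data-nebari-conda-store-postgresql": 8,
    "hub-db-dir": 1,
    "nebari-conda-store-minio": 500,
    "nebari-conda-store-storage": 500,
    "nfs-server-nfs-storage": 200,
    "redis-data": 8,
}

volumes_monthly = sum(gib * PD_STANDARD_MONTHLY for gib in volumes_gib.values())
```

Summing the rows gives $49.00/month for all persistent volumes.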
Let's start by adding up the costs for each user profile, assuming usage of (1, 1, 6, 4, 4, 1) hours per day, respectively, over four days a week for four weeks (one month):
Small: (1 hour/day) x (4 days/week) x (4 weeks/month) = 16 hours
CPU Lim: 2
MEM Lim: 8 GB
Instance type: n1-standard-4
Total cost: (16 hours) x (2 vCPU x $0.031611/vCPU/hour + 8 GB x $0.004237/GB/hour) = $1.553888
Medium: (1 hour/day) x (4 days/week) x (4 weeks/month) = 16 hours
CPU Lim: 4
MEM Lim: 16 GB
Instance type: n1-standard-4
Total cost: (16 hours) x (4 vCPU x $0.031611/vCPU/hour + 16 GB x $0.004237/GB/hour) = $3.107776
Large: (6 hours/day) x (4 days/week) x (4 weeks/month) = 96 hours
CPU Lim: 90
MEM Lim: 375 GB
Instance type: n2-standard-96
Total cost: (96 hours) x (90 vCPU x $0.031611/vCPU/hour + 375 GB x $0.004237/GB/hour) = $425.651040
Single GPU: (4 hours/day) x (4 days/week) x (4 weeks/month) = 64 hours
CPU Lim: 8
MEM Lim: 30 GB
Instance type: n1-standard-8
GPU type: nvidia-tesla-k80
Total cost: (64 hours) x (8 vCPU x $0.031611/vCPU/hour + 30 GB x $0.004237/GB/hour + 1 GPU x $0.450000/GPU/hour) = $53.119872
Single A100: (1 hour/day) x (4 days/week) x (4 weeks/month) = 16 hours
CPU Lim: 16
MEM Lim: 60 GB
Instance type: a2-highgpu-1g
GPU type: nvidia-ampere-100
Total cost: (16 hours) x (16 vCPU x $0.031611/vCPU/hour + 60 GB x $0.004237/GB/hour + 1 GPU x $2.934000/GPU/hour) = $59.103936
Multi GPU (x4): (4 hours/day) x (4 days/week) x (4 weeks/month) = 64 hours
CPU Lim: 8
MEM Lim: 30 GB
Instance type: n1-standard-8
GPU type: nvidia-tesla-k80 (x4)
Total cost: (64 hours) x (8 vCPU x $0.031611/vCPU/hour + 30 GB x $0.004237/GB/hour + 4 GPU x $0.450000/GPU/hour) = $139.519872
Adding up all the user costs gives us $682.056384 for the month.
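The per-profile arithmetic above can be collected into a single Python function (a sketch; `monthly_user_cost` and its defaults are illustrative, not an existing Nebari API):

```python
# Sketch: monthly user cost per profile, assuming the usage pattern
# stated earlier (N hours/day, 4 days/week, 4 weeks/month).
VCPU_PRICE = 0.031611  # $/vCPU/hour
MEM_PRICE = 0.004237   # $/GB/hour

def monthly_user_cost(hours_per_day, vcpus, mem_gb,
                      gpu_price=0.0, n_gpus=0,
                      days_per_week=4, weeks_per_month=4):
    """Monthly cost = usage hours x hourly profile cost."""
    hours = hours_per_day * days_per_week * weeks_per_month
    hourly = vcpus * VCPU_PRICE + mem_gb * MEM_PRICE + n_gpus * gpu_price
    return hours * hourly

profile_costs = {
    "Small": monthly_user_cost(1, 2, 8),
    "Medium": monthly_user_cost(1, 4, 16),
    "Large": monthly_user_cost(6, 90, 375),
    "Single GPU": monthly_user_cost(4, 8, 30, gpu_price=0.45, n_gpus=1),
    "Multi GPU (x4)": monthly_user_cost(4, 8, 30, gpu_price=0.45, n_gpus=4),
    "Single A100": monthly_user_cost(1, 16, 60, gpu_price=2.934, n_gpus=1),
}
total_user_cost = sum(profile_costs.values())  # approx. $682.06
```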
The General and Worker nodes run continuously, so their node cost accrues for every hour of the month. Both also use the shared NFS storage and the persistent volumes listed above; together, these infrastructure costs make up the remainder of the monthly total (roughly $638.84, i.e., the final total below minus the user costs).
Finally, adding the user and infrastructure costs, we get $1,320.90 for the month.
@viniciusdc please make this into a Python function or a Google spreadsheet
This is a good source for a FAQ on the cost of running Nebari. Thanks @viniciusdc!
This issue tracks the work involved in conducting an infrastructure audit and generating cost estimates for daily user usage. The goal is a comprehensive view of the costs associated with each resource, enabling informed decisions about resource allocation and optimization. This work will later be extended to the other supported cloud providers.
Tasks
Additional Information