SoMuchForSubtlety opened this issue 2 years ago
Nice. I think we are trying to get away from scraping but I’ll check the cost APIs and see if I can pull it from there.
This would actually be very useful for Elasticache; there are Cloudwatch metrics that show "Network Bandwidth In Allowance Exceeded" and "Network Bandwidth Out Allowance Exceeded". The burst bandwidth is useful, but ultimately it is just that - burst balance. Baseline bandwidth is far more useful for sustained high-bandwidth compute (e.g. Elasticache).
AWS can drop packets in this situation, and there is no Cloudwatch Metric to show 'dropped packets'.
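If it helps, those allowance metrics should be queryable directly from Cloudwatch with something like the below (the cluster ID, node ID and time window are placeholders, and the dimensions may need adjusting for your setup):
❯ aws cloudwatch get-metric-statistics --region us-east-1 \
--namespace AWS/ElastiCache \
--metric-name NetworkBandwidthInAllowanceExceeded \
--dimensions Name=CacheClusterId,Value=my-cluster-001 Name=CacheNodeId,Value=0001 \
--start-time 2023-06-01T00:00:00Z --end-time 2023-06-02T00:00:00Z \
--period 300 --statistics Sum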
We use the r6g series for Elasticache, and the baseline bandwidth doesn't scale exactly with instance size either - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/memory-optimized-instances.html
Therefore adding these metrics to this tool would make it even more amazing. For bonus points, make it a column so we can compare instance types easily.
Is it possible to extract data from DescribeInstanceTypes?
❯ aws ec2 --region us-east-1 describe-instance-types \
--filters "Name=instance-type,Values=r6g.*" \
--query "InstanceTypes[].[InstanceType, NetworkInfo.NetworkPerformance, NetworkInfo.NetworkCards[0].BaselineBandwidthInGbps,NetworkInfo.NetworkCards[0].PeakBandwidthInGbps]" \
--output table
------------------------------------------------------
|               DescribeInstanceTypes                |
+---------------+--------------------+-------+-------+
|  r6g.medium   |  Up to 10 Gigabit  |  0.5  |  10.0 |
|  r6g.16xlarge |  25 Gigabit        |  25.0 |  25.0 |
|  r6g.large    |  Up to 10 Gigabit  |  0.75 |  10.0 |
|  r6g.8xlarge  |  12 Gigabit        |  12.0 |  12.0 |
|  r6g.4xlarge  |  Up to 10 Gigabit  |  5.0  |  10.0 |
|  r6g.12xlarge |  20 Gigabit        |  20.0 |  20.0 |
|  r6g.metal    |  25 Gigabit        |  25.0 |  25.0 |
|  r6g.2xlarge  |  Up to 10 Gigabit  |  2.5  |  10.0 |
|  r6g.xlarge   |  Up to 10 Gigabit  |  1.25 |  10.0 |
+---------------+--------------------+-------+-------+
"Up to X Gbps" is not very useful, thankfully AWS provides the actual baseline and burst network bandwidths. Here are the links for general purpose, compute optimized, memory optimized, storage optimized and accelerated computing.