otterley opened this issue 2 years ago
@otterley We have made rapid improvements to Blueprints to support Graviton. We have also created a new GravitonBuilder, along with documentation to support Graviton patterns. Please check this and let us know if this ticket can be closed.
I looked at GravitonBuilder, but it is unclear to me whether it fulfills the UX I described above.
GravitonBuilder appears to build an EKS cluster and attach a Graviton-based node group to it. But that's not how most customers use EKS clusters with Graviton. Instead, customers typically run a mix of Graviton and x86 nodes depending on their specific requirements. They might be in the midst of an architecture transition; they might want instance-type flexibility (especially for workloads that run on Spot Instances); and they might have pods that cannot be migrated to Graviton for historical or proprietary reasons.
Hence, this request is all about making it easier for customers to supply and attach both x86 and Graviton nodes to clusters as they deem fit.
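A minimal sketch of that UX, assuming the Blueprints 1.x GenericClusterProvider API (property names such as managedNodeGroups, instanceTypes, and amiType are taken from that interface; the instance types and sizes here are placeholders):

```ts
import * as blueprints from '@aws-quickstart/eks-blueprints';
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';

const app = new cdk.App();

// One cluster, two managed node groups: x86 for workloads that cannot move yet,
// Graviton (arm64) for everything that can.
const clusterProvider = new blueprints.GenericClusterProvider({
  version: eks.KubernetesVersion.V1_21,
  managedNodeGroups: [
    {
      id: 'x86-nodes',
      instanceTypes: [new ec2.InstanceType('m5.large')],
      amiType: eks.NodegroupAmiType.AL2_X86_64,
      minSize: 1,
      maxSize: 5,
    },
    {
      id: 'graviton-nodes',
      instanceTypes: [new ec2.InstanceType('m6g.large')],
      amiType: eks.NodegroupAmiType.AL2_ARM_64,
      minSize: 1,
      maxSize: 5,
    },
  ],
});

blueprints.EksBlueprint.builder()
  .clusterProvider(clusterProvider)
  .build(app, 'mixed-arch-cluster');
```

Workloads can then target one architecture or the other with ordinary nodeSelector/affinity rules on the kubernetes.io/arch node label.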
Describe the bug
GenericClusterProvider autoscalingNodeGroups should support Graviton. Currently it does not specify the correct AMI variant for the arm64 CPU architecture when creating the launch configuration.
Separately - though this is less important - the stack is creating a Launch Configuration instead of a Launch Template. Modern stacks should use EC2 Launch Templates as they have more flexibility and can be reused outside the Auto Scaling Group context.
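A rough sketch of the direction such a fix could take, using standard aws-cdk-lib constructs (InstanceType.architecture, EksOptimizedImage, LaunchTemplate); the amiFor and gravitonLaunchTemplate helpers are hypothetical and not part of the Blueprints codebase:

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import { Construct } from 'constructs';

// Pick the EKS-optimized AMI variant that matches the node group's CPU
// architecture instead of hard-coding the x86_64 image.
function amiFor(instanceType: ec2.InstanceType, version: eks.KubernetesVersion): ec2.IMachineImage {
  const cpuArch = instanceType.architecture === ec2.InstanceArchitecture.ARM_64
    ? eks.CpuArch.ARM_64
    : eks.CpuArch.X86_64;
  return new eks.EksOptimizedImage({ kubernetesVersion: version.version, cpuArch });
}

// Use a Launch Template (not a Launch Configuration) so the same template can
// be reused outside the Auto Scaling group context.
function gravitonLaunchTemplate(scope: Construct, id: string): ec2.LaunchTemplate {
  const instanceType = new ec2.InstanceType('m6g.large');
  return new ec2.LaunchTemplate(scope, id, {
    instanceType,
    machineImage: amiFor(instanceType, eks.KubernetesVersion.V1_21),
  });
}
```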
Expected Behavior
Create a cluster using GenericClusterProvider with an autoscalingNodeGroups attribute that specifies a Graviton (arm64) instance type, and have the correct AMI for arm64 be used.
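The exact snippet isn't reproduced here; a minimal sketch of such a configuration, assuming the Blueprints 1.x AutoscalingNodeGroup properties (id, instanceType, minSize, maxSize), with placeholder values:

```ts
import * as blueprints from '@aws-quickstart/eks-blueprints';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';

const clusterProvider = new blueprints.GenericClusterProvider({
  version: eks.KubernetesVersion.V1_21,
  autoscalingNodeGroups: [
    {
      id: 'graviton-asg',
      // arm64 (Graviton) instance type; the generated launch configuration or
      // template should therefore receive the arm64 EKS-optimized AMI.
      instanceType: new ec2.InstanceType('m6g.large'),
      minSize: 1,
      maxSize: 3,
    },
  ],
});
```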
Current Behavior
An AMI for x86_64 was specified in the Launch Template instead, causing the scale-out to fail with the following error message:
Reproduction Steps
See above
Possible Solution
No response
Additional Information/Context
No response
CDK CLI Version
2.23.0
EKS Blueprints Version
1.0.0
Node.js Version
v16.15.0
Environment details (OS name and version, etc.)
-
Other information
No response