vishalbollu opened this issue 4 years ago
+1. This is critical for a cost-effective deployment.
Hi, I'd like to look into this issue if anyone can help me get started.
@lezwon thanks for your interest!
I think the first step is to figure out how to create an EKS cluster with instances that have Elastic Inference attached. Currently, Cortex uses eksctl to create the cluster, and based on https://github.com/weaveworks/eksctl/issues/643, it looks like eksctl might not support Elastic Inference yet. But I am not sure if that's the case, or if there is a workaround; it could be worth reaching out to the eksctl team to inquire.
@RobertLucian or @vishalbollu, do you have any additional context on this?
@deliahu Thank you for the help. I'll look into the issue you mentioned with eksctl. :)
@lezwon sounds good, thank you, keep us posted!
This issue has been deprioritized and the relevant eksctl issue was closed for inactivity, but using EI would save costs for most Cortex users. Are there any plans to address this in an upcoming release?
@H4dr1en we recently added multi-instance-type clusters as a feature. This can already mitigate costs by allowing CPU, GPU, and spot instances to run in the same cluster.
I know it is not remotely the same as Elastic Inference, but it is an improvement :)
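For reference, a multi-instance-type setup like the one described above could look roughly like the following cluster config sketch. The field names (`node_groups`, `instance_type`, `spot`, `min_instances`, `max_instances`) follow Cortex's cluster configuration format but may differ between versions, so treat this as an illustration rather than a copy-paste config:

```yaml
# Hypothetical sketch: a Cortex cluster mixing a spot CPU node group
# with an on-demand GPU node group. Verify field names against the
# docs for your Cortex version.
node_groups:
  - name: cpu-spot
    instance_type: m5.large
    spot: true
    min_instances: 0
    max_instances: 4
  - name: gpu-on-demand
    instance_type: g4dn.xlarge
    min_instances: 0
    max_instances: 2
```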
We will look into Elastic Inference again soon since we are re-focusing the team's efforts on improving the Cortex UX on AWS.
Description
Instead of spinning up a GPU nodegroup, spin up a CPU nodegroup with Elastic Inference attached (GPU-accelerated inference on CPU instances).
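Outside of eksctl, attaching an accelerator at instance launch is done through the EC2 RunInstances API, which accepts an `ElasticInferenceAccelerators` parameter. The sketch below builds the parameter dict that would be passed to boto3's `ec2_client.run_instances(**params)`; the AMI ID is a placeholder, and the helper function name is made up for illustration:

```python
# Sketch: launching a CPU instance with an Elastic Inference accelerator
# attached, via the EC2 RunInstances API. The actual call would be
# boto3.client("ec2").run_instances(**params); here we only build the
# request parameters. build_run_instances_params is a hypothetical helper.
def build_run_instances_params(
    instance_type="c5.large",        # plain CPU instance
    accelerator_type="eia2.medium",  # EI accelerator attached at launch
    ami_id="ami-PLACEHOLDER",        # placeholder, not a real AMI
):
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        # ElasticInferenceAccelerators attaches the accelerator at
        # launch time; GPU acceleration comes from it, not the instance.
        "ElasticInferenceAccelerators": [
            {"Type": accelerator_type, "Count": 1}
        ],
    }

params = build_run_instances_params()
print(params["ElasticInferenceAccelerators"])
# → [{'Type': 'eia2.medium', 'Count': 1}]
```

The catch for Cortex is that eksctl-managed nodegroups did not expose this parameter, which is why the linked eksctl issue matters.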
Additional Context