Open jpoley opened 5 years ago
Can I vote for a "free like Google Kubernetes Engine" option?
One +1 from me as well. I spent a full year inside a corporate account and did a lot of work with, and gained a lot of experience on, EKS.
Unfortunately, changing jobs does not always immediately grant you AWS access. I would love to continue investing in EKS as a personal user, but at a more reasonable price. As a personal user I would of course not expect the same SLA or request volume, so a single master or relaxed availability guarantees would be fine, as long as I have access to the latest features and get updates.
Thanks for considering
Truly useful request: QA, staging, and sandbox environments are perfect candidates for this. Right now we still have to use kops just to bring up a single-master cluster in those environments, whereas having the same single-master setup on EKS would make everything much more homogeneous.
We would love to see pricing change as well. With our current workload I can make the argument that the cost is worth the trade-off of us having to manage the Control Plane ourselves, but I don't know if I can justify that as my team continues to deploy more clusters in all environment types (Prod / Staging / Dev / etc.).
I too would love to see a pricing model similar to GCP or Azure where the Control Plane is free regardless of cluster size or purpose.
One additional perspective on why the expense of the control plane is problematic: in a large, multi-account environment that uses the AWS account as a strong boundary/blast radius, orgs end up needing to run dozens or hundreds of EKS clusters if EKS is to be a standardized part of the development platform. Many of these are dev/qa/sandbox accounts, but even for production, in a model where you don't have just one or a small handful of large-scale clusters, a cheaper or free control plane would be a big win.
To @syates21's point: this is the main reason, and it is really important!
GCP has had this (for a while now), AKS will have it soon - and as a leader in cloud AWS needs to have this as of yesterday.
It also gets more developers and those new to EKS using EKS (and by proxy AWS) if you can "play for free" and only pay for the worker nodes.
Can someone from the AWS Container team confirm that AWS is actually considering this? Based on this issue's current labels, it does not seem to be under consideration (which contradicts some responses in linked issues), or at least is not going to happen any time soon.
I'd like to understand whether AWS will ever be competitive with other hyperscalers like Azure and GCP for enterprises fully adopting Kubernetes with a typical multi-account, and thus multi-cluster, environment...
The reality is: they don't have to be. They may lose some new AWS customers, but for existing AWS customers, moving away from AWS to GCP over $144/mth/cluster is just not an option. The cost of that knowledge transfer is massive.
What really hurts is paying for sandpit/non-prod environments at full rack rate when, some days, they aren't even used.
Mind you, if you run your own k8s master, it's still going to cost about $100/mth to be as redundant as EKS. The saving isn't worth the administrative overhead when things go wrong.
A different point of view: at our shop we are AWS-centric, but we have clients demanding that their data live in GCP, so part of our stack is there too. We are migrating our monolith to containers (EKS). Our Kubernetes CI/CD is a self-hosted Jenkins X cluster (built with kops) that autoscales to several nodes. We're thinking of destroying the whole CI/CD setup during nights and weekends, but it could actually be easier to just move our Jenkins X cluster to GCP and use node scaling to 0, where it would cost $0. With this in mind, I think they could actually be losing money from existing clients as well.
It could also be the case that existing clients break their monoliths into k8s clusters with workloads that don't depend on any AWS service.
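For anyone wanting the nights-and-weekends approach on EKS in the meantime, the worker nodes (though not the control plane fee) can be scaled to zero on a schedule. A minimal sketch as crontab entries, assuming eksctl is installed and using hypothetical cluster/nodegroup names (`ci-cluster`, `ci-nodes`):

```shell
# Scale the CI worker nodegroup to zero on Friday evening
# and back up on Monday morning. Names are placeholders.
# m h dom mon dow  command
0 20 * * 5  eksctl scale nodegroup --cluster ci-cluster --name ci-nodes --nodes 0 --nodes-min 0
0 7  * * 1  eksctl scale nodegroup --cluster ci-cluster --name ci-nodes --nodes 3 --nodes-min 1
```

Note that the per-hour control plane charge still accrues while the nodegroup sits at zero, which is exactly the cost this issue is about.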
Indeed, more and more companies are setting up multi-cloud environments where things like k8s are quite easy to move...
Hi everyone,
We're excited to announce that as of today, we've reduced the price of Amazon EKS by 50% to $0.10 per hour (~$72 per month). We think this will go a long way towards making it cost-effective to use EKS clusters for smaller workloads, especially in development and test environments. We're not going to stop working to make EKS more accessible for more people and more workloads.
For now, you can learn more about the price drop on our blog.
-Nate
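To put the numbers in this thread side by side, here is a quick back-of-the-envelope check of the per-cluster control plane cost, assuming a 30-day month:

```python
# EKS control plane cost per cluster, assuming a 30-day month.
HOURS_PER_MONTH = 24 * 30  # 720 hours

old_rate = 0.20  # $/hour before the price cut
new_rate = 0.10  # $/hour after the 50% reduction

old_monthly = old_rate * HOURS_PER_MONTH
new_monthly = new_rate * HOURS_PER_MONTH

print(f"old: ${old_monthly:.0f}/month, new: ${new_monthly:.0f}/month")
# -> old: $144/month, new: $72/month
```

This matches the ~$144/mo figure cited earlier in the thread and the ~$72/mo in the announcement.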
Hi Nate, that is great. Even better would be the option to run dev/stage/qa/non-critical environments at close to $0 in costs. We have decided not to use AWS because of this and will go with Google for Kubernetes instead. It is sad, because we really wanted to stay with AWS for Kubernetes: we have a lot of other things with you and are happy with your service overall.
Got an e-mail from Google today that seems relevant here:
We're making some changes to the way we offer Google Kubernetes Engine (GKE). Starting June 6, 2020, your GKE clusters will accrue a management fee of $0.10 per cluster per hour, irrespective of cluster size and topology.
So, Google's matching AWS on price rather than AWS matching Google, which is kinda the opposite of what we were all hoping for.
@tdmalone It's worth noting, however, that Google still provides a free option for those who only need a development environment:
The zonal exemption (1 zonal cluster free) allows you to subtract zonal cluster usage up to a maximum number of hours in a month. For example, if a month is 30 days long, then your zonal cluster exemption allows you to subtract a maximum of 720 hours (30 days x 24 hours) of zonal cluster usage from your billable hours per billing account.
That sounds exactly like the use case described in this issue.
AWS' turn to win over the users now by removing the EKS charges!
There are many ways AWS could make the EKS control plane dev-cost friendly. I can think of several. One possibility is for AWS to offer a single-AZ EKS cluster option. This version of EKS could have reduced capabilities: a lower maximum node count, slower CPUs, lower IOPS, etc. AWS could also provide an API/GUI to shut down the whole EKS cluster; in that state, AWS would incur lower management costs.
I still believe they could just extend the free tier to one EKS cluster control plane. $73 per user doesn't make a huge difference to them, and it would enable newcomers to bring more business to their platform.
A one-year free tier for a limited-size EKS cluster makes sense. They already give away a year of a single Redshift node, at a higher list cost. They are probably just not prepared to handle the expected number of clusters that would be running: everyone would spin up a cluster, and then after a year, spin it down.
It would be helpful, and would likely increase adoption, if there were a developer-friendly price tier for a lower-class (non-redundant) control plane. Running a cluster is $144/mo before worker nodes, which is too much for an individual developer.