-
### Problem Summary
The code can provision 50+ labs, but tearing down 50 or more fails: the teardown script hits a limitation in AWS boto, which has a max limit of 200…
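If the failure really is a per-call item cap, the usual workaround is to chunk the teardown into batches. A minimal sketch with boto3; the helper name is hypothetical, and the 200 figure is taken from the issue text (the actual per-call limit may differ by API):

```python
import boto3

ec2 = boto3.client("ec2")
BATCH = 200  # cap cited in the issue; the real per-call limit may differ by API


def terminate_in_batches(instance_ids):
    """Tear down instances in chunks so no single call exceeds the cap."""
    for i in range(0, len(instance_ids), BATCH):
        ec2.terminate_instances(InstanceIds=instance_ids[i:i + BATCH])
```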
-
### Description
**What problem are you trying to solve?**
Tag EC2 nodes deployed by Karpenter without specifying the tags in NodeClasses
**How important is this feature to you?**
We use a strategy where…
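Pending native support, one out-of-band workaround is to tag instances by filtering on the tag Karpenter applies to the instances it launches. A sketch with boto3, assuming recent Karpenter releases tag instances with `karpenter.sh/nodepool` (this key is an assumption; older releases used a different key):

```python
import boto3

ec2 = boto3.client("ec2")


def tag_karpenter_instances(extra_tags):
    # Find instances Karpenter launched; the tag key is an assumption
    # based on recent Karpenter releases. Pagination omitted for brevity.
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag-key", "Values": ["karpenter.sh/nodepool"]}]
    )
    ids = [
        inst["InstanceId"]
        for res in resp["Reservations"]
        for inst in res["Instances"]
    ]
    if ids:
        ec2.create_tags(
            Resources=ids,
            Tags=[{"Key": k, "Value": v} for k, v in extra_tags.items()],
        )
```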
-
I'm attempting to run [spark-thriftserver](http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) using this scheduler extender. If you're not familiar…
-
Unsure if this is _quite_ a bug yet, but with a customer using the CCM, we're seeing the following in a cluster that frequently scales up and down by several hundred nodes:
```
E0410 21:54:21.05…
```
-
### What happened + What you expected to happen
We have a Ray cluster on EC2 with autoscaling (min: 2, max: 10 nodes).
Our load is ~3000 tasks.
After we start our load, we expect cluster ut…
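For reproduction purposes, the load described (~3000 tasks against an autoscaling 2–10 node cluster) might look roughly like the sketch below; the task body and resource request are placeholders, not taken from the issue:

```python
import ray

ray.init(address="auto")  # connect to the existing EC2 cluster


@ray.remote(num_cpus=1)  # placeholder resource request
def work(i):
    # Placeholder task body; the real workload is not shown in the issue.
    return i * i


futures = [work.remote(i) for i in range(3000)]
results = ray.get(futures)
```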
-
**What version of the component are you using?**
v1.28.0
**EKS Version**: 1.28
**Error logs**
clusterstate.go:1033] Failed to check cloud provider has instance for ip-xxx-xx-xx-xx.ec2.internal…
-
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the…
-
For some time we've been noticing network-related errors on AWS hubs when scaling up from zero nodes. This hasn't been too much of a concern because things usually recover on their own, for example, l…
-
I am having issues getting my first EC2 instance to work with Session Manager.
I have configured Systems Manager to automatically convert new EC2 instances to managed nodes, per [this documentation…
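When debugging this kind of setup, one way to confirm whether an instance actually registered as a managed node is to query SSM directly. A sketch with boto3; the instance ID is a placeholder:

```python
import boto3

ssm = boto3.client("ssm")

# Placeholder instance ID; substitute the instance that isn't appearing.
resp = ssm.describe_instance_information(
    Filters=[{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}]
)
if resp["InstanceInformationList"]:
    info = resp["InstanceInformationList"][0]
    print(info["PingStatus"], info.get("AgentVersion"))
else:
    print("Instance has not registered with Systems Manager")
```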
-
**Describe the bug**
When multiple pods are running in an EKS environment, each node has multiple private IP addresses based on the number of pods running on it; nginx-asg-sync f…
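To see the multiple addresses involved, one can dump every private IP attached to a node's network interfaces. A sketch with boto3; the instance ID is a placeholder for one affected EKS node:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID for one affected EKS node.
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        for eni in instance["NetworkInterfaces"]:
            for addr in eni["PrivateIpAddresses"]:
                # Secondary IPs are the ones the VPC CNI hands out to pods.
                print(eni["NetworkInterfaceId"], addr["PrivateIpAddress"],
                      "primary" if addr.get("Primary") else "secondary")
```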