[Closed] gdrudy closed this issue 5 years ago
Hey @gdrudy, this feature has not made it into a release yet. Sorry for the confusion. Please feel free to build from master if you want to try out this feature now.
make eksctl
We are in the middle of reviewing our release process. We will have a changelog to prevent this kind of issue in the future.
Thanks @cristian-radu. When I thought about it, I reckoned that was the case. I'll build from master and try it out.
What happened?
I'm trying to deploy a highly available NAT gateway. Looking through the code, I see what looks like a command-line option, --vpc-nat-mode, though it doesn't work when I try to use it:
$ eksctl create cluster --name gerryd-eksctl-ha --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4 --node-ami auto --region us-east-1 --node-private-networking --ssh-access --vpc-nat-mode HighlyAvailable
Error: unknown flag: --vpc-nat-mode
Usage:
  eksctl create cluster [flags]
General flags:
  -n, --name string           EKS cluster name (generated if unspecified, e.g. "exciting-badger-1561731139")
      --tags stringToString   A list of KV pairs used to tag the AWS resources (e.g. "Owner=John Doe,Team=Some Team") (default [])
  -r, --region string         AWS region
      --zones strings         (auto-select if unspecified)
      --version string        Kubernetes version (valid options: 1.10, 1.11, 1.12, 1.13) (default "1.12")
  -f, --config-file string    load configuration from a file
Initial nodegroup flags:
      --nodegroup-name string          name of the nodegroup (generated if unspecified, e.g. "ng-fbdefd74")
      --without-nodegroup              if set, initial nodegroup will not be created
  -t, --node-type string               node instance type (default "m5.large")
  -N, --nodes int                      total number of nodes (for a static ASG) (default 2)
  -m, --nodes-min int                  minimum nodes in ASG (default 2)
  -M, --nodes-max int                  maximum nodes in ASG (default 2)
      --node-volume-size int           node volume size in GB
      --node-volume-type string        node volume type (valid options: gp2, io1, sc1, st1) (default "gp2")
      --max-pods-per-node int          maximum number of pods per node (set automatically if unspecified)
      --ssh-access                     control SSH access for nodes. Uses ~/.ssh/id_rsa.pub as default key path if enabled
      --ssh-public-key string          SSH public key to use for nodes (import from local path, or use existing EC2 key pair)
      --node-ami string                Advanced use cases only. If 'static' is supplied (default) then eksctl will use static AMIs; if 'auto' is supplied then eksctl will automatically set the AMI based on version/region/instance type; if any other value is supplied it will override the AMI to use for the nodes. Use with extreme care. (default "static")
      --node-ami-family string         Advanced use cases only. If 'AmazonLinux2' is supplied (default), then eksctl will use the official AWS EKS AMIs (Amazon Linux 2); if 'Ubuntu1804' is supplied, then eksctl will use the official Canonical EKS AMIs (Ubuntu 18.04). (default "AmazonLinux2")
  -P, --node-private-networking        whether to make nodegroup networking private
      --node-security-groups strings   Attach additional security groups to nodes, so that it can be used to allow extra ingress/egress access from/to pods
      --node-labels stringToString     Extra labels to add when registering the nodes in the nodegroup, e.g. "partition=backend,nodeclass=hugememory" (default [])
      --node-zones strings             (inherited from the cluster if unspecified)
Cluster and nodegroup add-ons flags:
      --asg-access            enable IAM policy for cluster-autoscaler
      --external-dns-access   enable IAM policy for external-dns
      --full-ecr-access       enable full access to ECR
      --appmesh-access        enable full access to AppMesh
      --alb-ingress-access    enable full access for alb-ingress-controller
      --storage-class         if true (default) then a default StorageClass of type gp2 provisioned by EBS will be created (default true)
VPC networking flags:
      --vpc-cidr ipNet                 global CIDR to use for VPC (default 192.168.0.0/16)
      --vpc-private-subnets strings    re-use private subnets of an existing VPC
      --vpc-public-subnets strings     re-use public subnets of an existing VPC
      --vpc-from-kops-cluster string   re-use VPC from a given kops cluster
AWS client flags:
  -p, --profile string        AWS credentials profile to use (overrides the AWS_PROFILE environment variable)
      --timeout duration      max wait time in any polling operations (default 25m0s)
      --cfn-role-arn string   IAM role used by CloudFormation to call AWS API on your behalf
Output kubeconfig flags:
      --kubeconfig string        path to write kubeconfig (incompatible with --auto-kubeconfig) (default "/home/gerryd/.kube/config")
      --set-kubeconfig-context   if true then current-context will be set in kubeconfig; if a context is already set then it will be overwritten (default true)
      --auto-kubeconfig          save kubeconfig file by cluster name, e.g. "/home/gerryd/.kube/eksctl/clusters/exciting-badger-1561731139"
      --write-kubeconfig         toggle writing of kubeconfig (default true)
Common flags:
  -C, --color string   toggle colorized logs (true,false,fabulous) (default "true")
  -h, --help           help for this command
  -v, --verbose int    set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)
Use 'eksctl create cluster [command] --help' for more information about a command.
--vpc-nat-mode is not documented in the help output, so I'm likely mistaken about it being a command-line option!
However, when I try setting vpc.nat.gateway in the config file (per examples/09-nat-gateways.yaml):
vpc:
nat:
gateway: HighlyAvailable # other options: Disable, Single (default)
I get the following failure:
$ eksctl create cluster -f simple_cluster.yaml
[✖]  loading config file "simple_cluster.yaml": error unmarshaling JSON: while decoding JSON: json: unknown field "nat"
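For reference, here is a sketch of the full config file the vpc.nat fragment belongs in, modeled on examples/09-nat-gateways.yaml from the eksctl repository. The metadata values below are taken from the cluster command in this report and are illustrative; the apiVersion shown is the one in use at the time of this issue.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: gerryd-eksctl-ha   # illustrative, from the command above
  region: us-east-1

vpc:
  nat:
    gateway: HighlyAvailable # other options: Disable, Single (default)
```

On an eksctl build that predates the feature, this file still fails with the unknown field "nat" error, since the released config schema does not yet know the field; a build from master picks up the new schema.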
Versions
Please paste in the output of these commands: