coreos / terraform-aws-kubernetes

Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more
Apache License 2.0
117 stars 67 forks

terraform init - errors #20

Open TinajaLabs opened 6 years ago

TinajaLabs commented 6 years ago

I'm trying to frame up a kubernetes demo for testing.

Although I couldn't find documentation anywhere on how to create a module, I looked at the example and, even though it mentions not to edit the file because `make` will overwrite it, I used it as the base of my Terraform config.

When I run terraform init, it loads all the supporting modules from GitHub and then throws some errors:

MacBook-Air-2:kuber cjefferies$ terraform init
Initializing modules...
- module.kubernetes
- module.kubernetes.container_linux
- module.kubernetes.vpc
- module.kubernetes.etcd
- module.kubernetes.ignition_masters
- module.kubernetes.masters
- module.kubernetes.ignition_workers
- module.kubernetes.workers
- module.kubernetes.dns
- module.kubernetes.kube_certs
- module.kubernetes.etcd_certs
- module.kubernetes.ingress_certs
- module.kubernetes.identity_certs
- module.kubernetes.bootkube
- module.kubernetes.tectonic
- module.kubernetes.flannel_vxlan
- module.kubernetes.calico
- module.kubernetes.canal

Warning: output "etcd_sg_id": must use splat syntax to access aws_security_group.etcd attribute "id", because it has "count" set; use aws_security_group.etcd.*.id to obtain a list of the attributes across all instances

Warning: output "aws_api_external_dns_name": must use splat syntax to access aws_elb.api_external attribute "dns_name", because it has "count" set; use aws_elb.api_external.*.dns_name to obtain a list of the attributes across all instances

Warning: output "aws_elb_api_external_zone_id": must use splat syntax to access aws_elb.api_external attribute "zone_id", because it has "count" set; use aws_elb.api_external.*.zone_id to obtain a list of the attributes across all instances

Warning: output "aws_api_internal_dns_name": must use splat syntax to access aws_elb.api_internal attribute "dns_name", because it has "count" set; use aws_elb.api_internal.*.dns_name to obtain a list of the attributes across all instances

Warning: output "aws_elb_api_internal_zone_id": must use splat syntax to access aws_elb.api_internal attribute "zone_id", because it has "count" set; use aws_elb.api_internal.*.zone_id to obtain a list of the attributes across all instances

Error: module "kubernetes": missing required argument "tectonic_admin_email"

Error: module "kubernetes": missing required argument "tectonic_admin_password"
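As far as I can tell, the splat warnings refer to outputs inside the upstream modules rather than my config: when a resource has `count` set, Terraform 0.11 wants list (splat) access instead of a direct attribute reference. A sketch of the pattern the warning asks for, using the `etcd_sg_id` output named in the log above (hypothetical, untested against the actual module source):

```hcl
// Terraform 0.11-style sketch for an output on a counted resource.
output "etcd_sg_id" {
  // aws_security_group.etcd has "count" set, so take the splat list
  // and collapse it to a single string with join().
  value = "${join(",", aws_security_group.etcd.*.id)}"
}
```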

Here is my main.tf:

module "kubernetes" {

  source = "../../modules/terraform-aws-kubernetes"

  // (optional) Extra AWS tags to be applied to created autoscaling group resources.
  // This is a list of maps having the keys `key`, `value` and `propagate_at_launch`.
  // 
  // Example: `[ { key = "foo", value = "bar", propagate_at_launch = true } ]`
  // tectonic_autoscaling_group_extra_tags = ""

  // (optional) Unique name under which the Amazon S3 bucket will be created. Bucket name must start with a lower case name and is limited to 63 characters.
  // The Tectonic Installer uses the bucket to store tectonic assets and kubeconfig.
  // If name is not provided the installer will construct the name using "tectonic_cluster_name", current AWS region and "tectonic_base_domain"
  // tectonic_aws_assets_s3_bucket_name = ""

  // Instance size for the etcd node(s). Example: `t2.medium`. Read the [etcd recommended hardware](https://coreos.com/etcd/docs/latest/op-guide/hardware.html) guide for best performance
  tectonic_aws_etcd_ec2_type = "t2.medium"

  // (optional) List of additional security group IDs for etcd nodes.
  // 
  // Example: `["sg-51530134", "sg-b253d7cc"]`
  // tectonic_aws_etcd_extra_sg_ids = ""

  // The amount of provisioned IOPS for the root block device of etcd nodes.
  // Ignored if the volume type is not io1.
  tectonic_aws_etcd_root_volume_iops = "100"
  // The size of the volume in gigabytes for the root block device of etcd nodes.
  tectonic_aws_etcd_root_volume_size = "30"
  // The type of volume for the root block device of etcd nodes.
  tectonic_aws_etcd_root_volume_type = "gp2"

  // (optional) List of subnet IDs within an existing VPC to deploy master nodes into.
  // Required to use an existing VPC and the list must match the AZ count.
  // 
  // Example: `["subnet-111111", "subnet-222222", "subnet-333333"]`
  // tectonic_aws_external_master_subnet_ids = ""

  // (optional) If set, the given Route53 zone ID will be used as the internal (private) zone.
  // This zone will be used to create etcd DNS records as well as internal API and internal Ingress records.
  // If set, no additional private zone will be created.
  // 
  // Example: `"Z1ILINNUJGTAO1"`
  // tectonic_aws_external_private_zone = ""

  // (optional) ID of an existing VPC to launch nodes into.
  // If unset a new VPC is created.
  // 
  // Example: `vpc-123456`
  // tectonic_aws_external_vpc_id = ""

  // (optional) List of subnet IDs within an existing VPC to deploy worker nodes into.
  // Required to use an existing VPC and the list must match the AZ count.
  // 
  // Example: `["subnet-111111", "subnet-222222", "subnet-333333"]`
  // tectonic_aws_external_worker_subnet_ids = ""

  // (optional) Extra AWS tags to be applied to created resources.
  // tectonic_aws_extra_tags = ""

  // (optional) This configures master availability zones and their corresponding subnet CIDRs directly.
  // 
  // Example:
  // `{ eu-west-1a = "10.0.0.0/20", eu-west-1b = "10.0.16.0/20" }`
  // tectonic_aws_master_custom_subnets = ""

  // Instance size for the master node(s). Example: `t2.medium`.
  tectonic_aws_master_ec2_type = "t2.medium"

  // (optional) List of additional security group IDs for master nodes.
  // 
  // Example: `["sg-51530134", "sg-b253d7cc"]`
  // tectonic_aws_master_extra_sg_ids = ""

  // (optional) Name of IAM role to use for the instance profiles of master nodes.
  // The name is also the last part of a role's ARN.
  // 
  // Example:
  //  * Role ARN  = arn:aws:iam::123456789012:role/tectonic-installer
  //  * Role Name = tectonic-installer
  // tectonic_aws_master_iam_role_name = ""

  // The amount of provisioned IOPS for the root block device of master nodes.
  // Ignored if the volume type is not io1.
  tectonic_aws_master_root_volume_iops = "100"
  // The size of the volume in gigabytes for the root block device of master nodes.
  tectonic_aws_master_root_volume_size = "30"
  // The type of volume for the root block device of master nodes.
  tectonic_aws_master_root_volume_type = "gp2"

  // (optional) If set to true, create private-facing ingress resources (ELB, A-records).
  // If set to false, no private-facing ingress resources will be provisioned and all DNS records will be created in the public Route53 zone.
  // tectonic_aws_private_endpoints = true

  // (optional) This declares the AWS credentials profile to use.
  tectonic_aws_profile = "acct_dev"

  // (optional) If set to true, create public-facing ingress resources (ELB, A-records).
  // If set to false, no public-facing ingress resources will be created.
  // tectonic_aws_public_endpoints = true

  // The target AWS region for the cluster.
  tectonic_aws_region = "us-west-1"
  // Name of an SSH key located within the AWS region. Example: coreos-user.
  tectonic_aws_ssh_key = "myPemKey"
  // Block of IP addresses used by the VPC.
  // This should not overlap with any other networks, such as a private datacenter connected via Direct Connect.
  tectonic_aws_vpc_cidr_block = "10.0.0.0/16"

  // (optional) This configures worker availability zones and their corresponding subnet CIDRs directly.
  // 
  // Example: `{ eu-west-1a = "10.0.64.0/20", eu-west-1b = "10.0.80.0/20" }`
  // tectonic_aws_worker_custom_subnets = ""

  // Instance size for the worker node(s). Example: `t2.medium`.
  tectonic_aws_worker_ec2_type = "t2.medium"

  // (optional) List of additional security group IDs for worker nodes.
  // 
  // Example: `["sg-51530134", "sg-b253d7cc"]`
  // tectonic_aws_worker_extra_sg_ids = ""

  // (optional) Name of IAM role to use for the instance profiles of worker nodes.
  // The name is also the last part of a role's ARN.
  // 
  // Example:
  //  * Role ARN  = arn:aws:iam::123456789012:role/tectonic-installer
  //  * Role Name = tectonic-installer
  // tectonic_aws_worker_iam_role_name = ""

  // (optional) List of ELBs to attach all worker instances to.
  // This is useful for exposing NodePort services via load-balancers managed separately from the cluster.
  // 
  // Example:
  //  * `["ingress-nginx"]`
  // tectonic_aws_worker_load_balancers = ""

  // The amount of provisioned IOPS for the root block device of worker nodes.
  // Ignored if the volume type is not io1.
  tectonic_aws_worker_root_volume_iops = "100"
  // The size of the volume in gigabytes for the root block device of worker nodes.
  tectonic_aws_worker_root_volume_size = "30"
  // The type of volume for the root block device of worker nodes.
  tectonic_aws_worker_root_volume_type = "gp2"
  // The base DNS domain of the cluster. It must NOT contain a trailing period. Some
  // DNS providers will automatically add this if necessary.
  // 
  // Example: `openstack.dev.coreos.systems`.
  // 
  // Note: This field MUST be set manually prior to creating the cluster.
  // This applies only to cloud platforms.
  // 
  // [Azure-specific NOTE]
  // To use Azure-provided DNS, `tectonic_base_domain` should be set to `""`
  // If using DNS records, ensure that `tectonic_base_domain` is set to a properly configured external DNS zone.
  // Instructions for configuring delegated domains for Azure DNS can be found here: https://docs.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns
  tectonic_base_domain = ""

  // (optional) The content of the PEM-encoded CA certificate, used to generate Tectonic Console's server certificate.
  // If left blank, a CA certificate will be automatically generated.
  // tectonic_ca_cert = ""

  // (optional) The content of the PEM-encoded CA key, used to generate Tectonic Console's server certificate.
  // This field is mandatory if `tectonic_ca_cert` is set.
  // tectonic_ca_key = ""

  // (optional) The algorithm used to generate tectonic_ca_key.
  // The default value is currently recommended.
  // This field is mandatory if `tectonic_ca_cert` is set.
  // tectonic_ca_key_alg = "RSA"

  // (optional) This declares the IP range to assign Kubernetes pod IPs in CIDR notation.
  // tectonic_cluster_cidr = "10.2.0.0/16"

  // The name of the cluster.
  // If used in a cloud-environment, this will be prepended to `tectonic_base_domain` resulting in the URL to the Tectonic console.
  // 
  // Note: This field MUST be set manually prior to creating the cluster.
  // Warning: Special characters in the name like '.' may cause errors on OpenStack platforms due to resource name constraints.
  tectonic_cluster_name = ""

  // (optional) The Container Linux update channel.
  // 
  // Examples: `stable`, `beta`, `alpha`
  // tectonic_container_linux_channel = "stable"

  // The Container Linux version to use. Set to `latest` to select the latest available version for the selected update channel.
  // 
  // Examples: `latest`, `1465.6.0`
  tectonic_container_linux_version = "latest"

  // (optional) A list of PEM encoded CA files that will be installed in /etc/ssl/certs on etcd, master, and worker nodes.
  // tectonic_custom_ca_pem_list = ""

  // (optional) This only applies if you use the modules/dns/ddns module.
  // 
  // Specifies the RFC2136 Dynamic DNS server key algorithm.
  // tectonic_ddns_key_algorithm = ""

  // (optional) This only applies if you use the modules/dns/ddns module.
  // 
  // Specifies the RFC2136 Dynamic DNS server key name.
  // tectonic_ddns_key_name = ""

  // (optional) This only applies if you use the modules/dns/ddns module.
  // 
  // Specifies the RFC2136 Dynamic DNS server key secret.
  // tectonic_ddns_key_secret = ""

  // (optional) This only applies if you use the modules/dns/ddns module.
  // 
  // Specifies the RFC2136 Dynamic DNS server IP/host to register IP addresses to.
  // tectonic_ddns_server = ""

  // (optional) DNS prefix used to construct the console and API server endpoints.
  // tectonic_dns_name = ""

  // (optional) The size in MB of the PersistentVolume used for handling etcd backups.
  // tectonic_etcd_backup_size = "512"

  // (optional) The name of an existing Kubernetes StorageClass that will be used for handling etcd backups.
  // tectonic_etcd_backup_storage_class = ""

  // (optional) The path of the file containing the CA certificate for TLS communication with etcd.
  // 
  // Note: This works only when used in conjunction with an external etcd cluster.
  // If set, the variable `tectonic_etcd_servers` must also be set.
  // tectonic_etcd_ca_cert_path = "/dev/null"

  // (optional) The path of the file containing the client certificate for TLS communication with etcd.
  // 
  // Note: This works only when used in conjunction with an external etcd cluster.
  // If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_key_path` must also be set.
  // tectonic_etcd_client_cert_path = "/dev/null"

  // (optional) The path of the file containing the client key for TLS communication with etcd.
  // 
  // Note: This works only when used in conjunction with an external etcd cluster.
  // If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_cert_path` must also be set.
  // tectonic_etcd_client_key_path = "/dev/null"

  // The number of etcd nodes to be created.
  // If set to zero, the count of etcd nodes will be determined automatically.
  // 
  // Note: This is not supported on bare metal.
  tectonic_etcd_count = "0"

  // (optional) List of external etcd v3 servers to connect with (hostnames/IPs only).
  // Needs to be set if using an external etcd cluster.
  // Note: If this variable is defined, the installer will not create self-signed certs.
  // To provide a CA certificate to trust the etcd servers, set "tectonic_etcd_ca_cert_path".
  // 
  // Example: `["etcd1", "etcd2", "etcd3"]`
  // tectonic_etcd_servers = ""

  // (optional) If set to `true`, all etcd endpoints will be configured to use the "https" scheme.
  // 
  // Note: If `tectonic_experimental` is set to `true` this variable has no effect, because the experimental self-hosted etcd always uses TLS.
  // tectonic_etcd_tls_enabled = true

  // The path to the tectonic licence file.
  // You can download the Tectonic license file from your Account overview page at [1].
  // 
  // [1] https://account.coreos.com/overview
  // 
  // Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.
  tectonic_license_path = ""
  // The number of master nodes to be created.
  // This applies only to cloud platforms.
  tectonic_master_count = "1"

  // (optional) Configures the network to be used in Tectonic. One of the following values can be used:
  // 
  // - "flannel": enables overlay networking only. This is implemented by flannel using VXLAN.
  // 
  // - "canal": [ALPHA] enables overlay networking including network policy. Overlay is implemented by flannel using VXLAN. Network policy is implemented by Calico.
  // 
  // - "calico": [ALPHA] enables BGP based networking. Routing and network policy is implemented by Calico. Note this has been tested on baremetal installations only.
  // tectonic_networking = "flannel"

  // The path to the pull secret file in JSON format.
  // This is known as a "Docker pull secret" as produced by the docker login [1] command.
  // A sample JSON content is shown in [2].
  // You can download the pull secret from your Account overview page at [3].
  // 
  // [1] https://docs.docker.com/engine/reference/commandline/login/
  // 
  // [2] https://coreos.com/os/docs/latest/registry-authentication.html#manual-registry-auth-setup
  // 
  // [3] https://account.coreos.com/overview
  // 
  // Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.
  tectonic_pull_secret_path = ""

  // (optional) This declares the IP range to assign Kubernetes service cluster IPs in CIDR notation.
  // The maximum size of this IP range is /12
  // tectonic_service_cidr = "10.3.0.0/16"

  // Validity period of the self-signed certificates (in hours).
  // Default is 3 years.
  // This setting is ignored if user provided certificates are used.
  tectonic_tls_validity_period = "26280"
  // If set to true, a vanilla Kubernetes cluster will be deployed, omitting any Tectonic assets.
  tectonic_vanilla_k8s = true
  // The number of worker nodes to be created.
  // This applies only to cloud platforms.
  tectonic_worker_count = "3"
}

I wish there were a little more explanation on getting a first frame-up running. Perhaps an example.tf that provides the minimum workable variables for an initial install.
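Something like this minimal sketch is what I have in mind; variable names are taken from the config above, and the values are placeholders only (untested):

```hcl
// Hypothetical minimal main.tf; every value here is a placeholder.
module "kubernetes" {
  source = "../../modules/terraform-aws-kubernetes"

  tectonic_aws_region   = "us-west-1"
  tectonic_aws_ssh_key  = "myPemKey"     // SSH key pair name in that region
  tectonic_base_domain  = "example.com"  // must be a real Route53 zone
  tectonic_cluster_name = "demo"
  tectonic_vanilla_k8s  = true           // skip Tectonic license/pull secret
}
```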

Thanks, Chris.

TinajaLabs commented 6 years ago

I seem to have moved a step forward by adding these variables to my main.tf.

  tectonic_admin_email    = "me@myemail"
  tectonic_admin_password = "mypass"

Now I'm getting these errors when running terraform plan:

Warning: output "etcd_sg_id": must use splat syntax to access aws_security_group.etcd attribute "id", because it has "count" set; use aws_security_group.etcd.*.id to obtain a list of the attributes across all instances

Warning: output "aws_api_external_dns_name": must use splat syntax to access aws_elb.api_external attribute "dns_name", because it has "count" set; use aws_elb.api_external.*.dns_name to obtain a list of the attributes across all instances

Warning: output "aws_elb_api_external_zone_id": must use splat syntax to access aws_elb.api_external attribute "zone_id", because it has "count" set; use aws_elb.api_external.*.zone_id to obtain a list of the attributes across all instances

Warning: output "aws_api_internal_dns_name": must use splat syntax to access aws_elb.api_internal attribute "dns_name", because it has "count" set; use aws_elb.api_internal.*.dns_name to obtain a list of the attributes across all instances

Warning: output "aws_elb_api_internal_zone_id": must use splat syntax to access aws_elb.api_internal attribute "zone_id", because it has "count" set; use aws_elb.api_internal.*.zone_id to obtain a list of the attributes across all instances

Error: Error refreshing state: 1 error(s) occurred:

* module.kubernetes.module.dns.data.aws_route53_zone.tectonic: 1 error(s) occurred:

* module.kubernetes.module.dns.data.aws_route53_zone.tectonic: data.aws_route53_zone.tectonic: no matching Route53Zone found
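The "no matching Route53Zone found" error suggests the dns module's data source can't find a hosted zone matching `tectonic_base_domain`, which is set to `""` in the config above. Presumably the lookup inside `module.kubernetes.module.dns` is something along these lines (the upstream module's exact arguments may differ):

```hcl
// Sketch of the failing lookup; "name" must match an existing
// Route53 public hosted zone in the account running terraform.
data "aws_route53_zone" "tectonic" {
  name = "${var.tectonic_base_domain}"
}
```

Setting `tectonic_base_domain` to a domain that already has a public hosted zone in the AWS account should let the state refresh succeed.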