gitpod-io / gitpod


[gitpod self-hosted] Could not find any VPCs matching certain filter #12134

Closed: nevotheless closed this issue 2 years ago

nevotheless commented 2 years ago

Bug description

Howdy, I'm currently trying to deploy Gitpod via the gitpod-eks-guide, but I'm running into several issues. I already asked about them in the Discord but haven't gotten any replies, hence I'm posting here.

This is a follow-up to #12133.

After running make install again following the issue referenced in #12133, the installer creates the services and workspaces nodegroups.

After a while I'm greeted by this:

2022-08-15 11:22:24 [✔]  created 2 managed nodegroup(s) in cluster "gitpod"
2022-08-15 11:22:27 [ℹ]  checking security group configuration for all nodegroups
2022-08-15 11:22:27 [ℹ]  all nodegroups have up-to-date cloudformation templates
[Error at /Setup] Could not find any VPCs matching {"account":"removed","region":"eu-central-1","filter":{"tag:Name":"eksctl-gitpod-cluster/VPC","isDefault":"false"},"returnAsymmetricSubnets":true}
[Error at /Services] Could not find any VPCs matching {"account":"removed","region":"eu-central-1","filter":{"tag:Name":"eksctl-gitpod-cluster/VPC","isDefault":"false"},"returnAsymmetricSubnets":true}
[Warning at /Services/RDS/Gitpod/SecurityGroup] Ignoring Egress rule since 'allowAllOutbound' is set to true; To add customize rules, set allowAllOutbound=false on the SecurityGroup

NOTICES

19836   AWS CDK v1 has entered maintenance mode

        Overview: AWS CDK v1 has entered maintenance mode on June 1, 2022.
                  Migrate to AWS CDK v2 to continue to get the latest features
                  and fixes!

        Affected versions: framework: 1.*, cli: 1.*

        More information at: https://github.com/aws/aws-cdk/issues/19836

If you don’t want to see a notice anymore, use "cdk acknowledge <id>". For example, "cdk acknowledge 19836".
Found errors
make: *** [install] Error 1

As you can see below, I've configured the eks-cluster.yml file so that it should work with our existing VPC instead of creating a new one. The error seems to indicate that the installer expects a newly created VPC rather than the existing one.
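
For reference, the lookup in the error message can be reproduced directly with the AWS CLI. This is only a diagnostic sketch (the region is ours, the tag value is copied from the error output); if it returns no VPCs, that matches the "Could not find any VPCs" failure:

# Should return the existing VPC if it carries the Name tag the lookup filters on
aws ec2 describe-vpcs \
  --region eu-central-1 \
  --filters "Name=tag:Name,Values=eksctl-gitpod-cluster/VPC" "Name=is-default,Values=false"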

Steps to reproduce

Our eks-cluster.yml (pretty much the default, except the VPC config is changed according to the existing-VPC example straight from the eksctl repo):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  # Template, please change
  # Please make sure you also update the definition of the variable
  # CLUSTERNAME=<cluster name> in the overrideBootstrapCommand section
  # and k8s.io/cluster-autoscaler/<cluster name>: "owned"
  # cluster-autoscaler will not require additional labels in a future release.
  # https://github.com/kubernetes/autoscaler/pull/3968
  name: gitpod
  # Template, please change
  region: eu-central-1
  version: "1.21"

iam:
  withOIDC: true

  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
    - metadata:
        name: ebs-csi-controller-sa
        namespace: kube-system
      wellKnownPolicies:
        ebsCSIController: true
    - metadata:
        name: cluster-autoscaler
        namespace: kube-system
      wellKnownPolicies:
        autoScaler: true

# By default we create a dedicated VPC for the cluster
# You can use an existing VPC by supplying private and/or public subnets. Please check
# https://eksctl.io/usage/vpc-networking/#use-existing-vpc-other-custom-configuration
# vpc:
#   autoAllocateIPv6: false
#   nat:
#     # For production environments use HighlyAvailable
#     # https://eksctl.io/usage/vpc-networking/#nat-gateway
#     gateway: Single

vpc:
  id: "vpc-vpcid"  # (optional, must match VPC ID used for each subnet below)
  subnets:
    # must provide 'private' and/or 'public' subnets by availability zone as shown
    private:
      eu-central-1c:
        id: "subnet-asdfe"

      eu-central-1b:
        id: "subnet-asdff"

      eu-central-1a:
        id: "subnet-asdf"

# Enable EKS control plane logging
# https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
cloudWatch:
  clusterLogging:
    enableTypes: ["audit", "authenticator"]

privateCluster:
  enabled: false
  additionalEndpointServices:
    - "autoscaling"
    - "logs"

managedNodeGroups:
  - name: workspaces
    desiredCapacity: 1
    minSize: 1
    maxSize: 10
    # because of AWS addons
    disableIMDSv1: false
    # Please configure the size of the volume and additional features
    # https://eksctl.io/usage/schema/#nodeGroups-volumeType
    # https://aws.amazon.com/es/ebs/pricing/
    volumeSize: 300
    volumeType: gp3
    volumeIOPS: 6000
    volumeThroughput: 500
    ebsOptimized: true
    # Use private subnets for nodes
    # https://eksctl.io/usage/vpc-networking/#use-private-subnets-for-initial-nodegroup
    privateNetworking: true
    ami: ami-04a8127c830f27712

    tags:
      # EC2 tags required for cluster-autoscaler auto-discovery
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/gitpod: "owned"
    iam:
      attachPolicyARNs: &attachPolicyARNs
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
      withAddonPolicies: &withAddonPolicies
        albIngress: true
        autoScaler: true
        cloudWatch: true
        certManager: true
        ebs: true
    # Using custom AMI images requires the definition of overrideBootstrapCommand
    # to ensure that nodes are able to join the cluster https://eksctl.io/usage/custom-ami-support/
    overrideBootstrapCommand: |
      #!/bin/bash

      export CLUSTERNAME=gitpod
      export NODEGROUP=workspaces

      declare -a LABELS=(
        eks.amazonaws.com/nodegroup="${NODEGROUP}"
        gitpod.io/workload_workspace_services=true
        gitpod.io/workload_workspace_regular=true
        gitpod.io/workload_workspace_headless=true
      )

      export KUBELET_EXTRA_ARGS="$(printf -- "--max-pods=110 --node-labels=%s" $(IFS=$','; echo "${LABELS[*]}"))"
      /etc/eks/bootstrap.sh ${CLUSTERNAME}

    spot: false
    # https://eksctl.io/usage/instance-selector/
    #instanceSelector:
    #  vCPUs: 8
    #  memory: 64Gib
    # or use a custom list
    instanceTypes: ["m6i.xlarge", "m6i.2xlarge"]

  - name: services
    desiredCapacity: 1
    minSize: 1
    maxSize: 3
    # because of AWS addons
    disableIMDSv1: false
    # Please configure the size of the volume and additional features
    # https://eksctl.io/usage/schema/#nodeGroups-volumeType
    # https://aws.amazon.com/es/ebs/pricing/
    volumeSize: 100
    volumeType: gp3
    volumeIOPS: 6000
    volumeThroughput: 500
    ebsOptimized: true
    # Use private subnets for nodes
    # https://eksctl.io/usage/vpc-networking/#use-private-subnets-for-initial-nodegroup
    privateNetworking: true
    ami: ami-04a8127c830f27712

    tags:
      # EC2 tags required for cluster-autoscaler auto-discovery
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/gitpod: "owned"
    iam:
      attachPolicyARNs: *attachPolicyARNs
      withAddonPolicies: *withAddonPolicies
    # Using custom AMI images requires the definition of overrideBootstrapCommand
    # to ensure that nodes are able to join the cluster https://eksctl.io/usage/custom-ami-support/
    overrideBootstrapCommand: |
      #!/bin/bash

      export CLUSTERNAME=gitpod
      export NODEGROUP=services

      declare -a LABELS=(
        eks.amazonaws.com/nodegroup="${NODEGROUP}"
        gitpod.io/workload_meta=true
        gitpod.io/workload_ide=true
      )

      export KUBELET_EXTRA_ARGS="$(printf -- "--max-pods=110 --node-labels=%s" $(IFS=$','; echo "${LABELS[*]}"))"
      /etc/eks/bootstrap.sh ${CLUSTERNAME}

    spot: false
    # https://eksctl.io/usage/instance-selector/
    #instanceSelector:
    #  vCPUs: 4
    #  memory: 16Gib
    # or use a custom list
    instanceTypes: ["m6i.xlarge", "m6i.2xlarge"]
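
A side note on the VPC lookup (this is my reading of how the eks-guide's CDK stacks find the VPC, not something stated in the docs): the filter in the error only matches a VPC tagged Name=eksctl-gitpod-cluster/VPC, which eksctl adds to a VPC it creates itself but which a pre-existing VPC does not have. One possible workaround would be to add that tag to the existing VPC, e.g.:

# Hypothetical workaround: tag the pre-existing VPC so the lookup filter matches.
# vpc-vpcid is the placeholder from eks-cluster.yml above; note this also changes
# the VPC's display name in the AWS console.
aws ec2 create-tags \
  --region eu-central-1 \
  --resources vpc-vpcid \
  --tags Key=Name,Value=eksctl-gitpod-cluster/VPC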

Our .env:

# Base domain
DOMAIN=gitpod.aws.intra.company.de

# AWS Certificate Manager certificate
# Setting this value implies TLS termination in the load balancer
CERTIFICATE_ARN=ourcertarn

# The AWS credentials profile name (optional)
# Leave empty or remove if you only set up the default one
AWS_PROFILE=

# The Route53 Zone ID (optional)
# If the DNS domain is managed by Route53 and you want to enable external-dns, please set the Route53 zone ID
# This enables the update of the DNS records required to get gitpod running using the Ingress rule
# definition as the source of truth.
ROUTE53_ZONEID=somezoneid

# The name of the S3 bucket where the container images that gitpod creates are stored
# If there is no value we create a new bucket with the name "container-registry-<cluster name>-<account ID>"
CONTAINER_REGISTRY_BUCKET=

# The path to the file containing the credentials to pull images from private container registries.
# https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
IMAGE_PULL_SECRET_FILE=/Users/tim/.docker/properConfig.json

# List of registries (hostnames) that users are allowed to use in base images by default.
# Default: only images from docker.io
IMAGE_REGISTRY_WHITELIST=docker.io,registry.gitpod.io,dockerreg.intra.company.de

# Allow to define internal or internet-facing ALB for gitpod proxy component.
# https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#scheme
# https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html#load-balancer-scheme
USE_INTERNAL_ALB=false

# Configure custom Availability Zone/s that ALB will route traffic to.
# https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#subnets
# Default: use auto discovery (empty)
ALB_SUBNETS=
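
Since ALB_SUBNETS is left empty, the AWS Load Balancer Controller falls back to subnet auto-discovery, which works off subnet tags (kubernetes.io/role/elb=1 for public subnets used by internet-facing ALBs, kubernetes.io/role/internal-elb=1 for private ones). With a pre-existing VPC those tags may have to be added by hand; a sketch, assuming the private subnet IDs from eks-cluster.yml above:

# Hypothetical: tag the existing private subnets so ALB subnet auto-discovery can find them.
# With USE_INTERNAL_ALB=false an internet-facing ALB additionally needs public subnets
# tagged kubernetes.io/role/elb=1.
aws ec2 create-tags \
  --region eu-central-1 \
  --resources subnet-asdfe subnet-asdff subnet-asdf \
  --tags Key=kubernetes.io/role/internal-elb,Value=1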

Running make install

Workspace affected

No response

Expected behavior

No response

Example repository

No response

Anything else?

No response

corneliusludmann commented 2 years ago

See also this comment: https://github.com/gitpod-io/gitpod/issues/12133#issuecomment-1214946524