gravitational / teleport


Discovery Service regression after upgrading to 16.2.x and above #46877

Closed: tunguyen9889 closed this issue 1 month ago

tunguyen9889 commented 1 month ago

Expected behavior:

The Discovery Service, configured statically via teleport.yaml, starts and runs without logging errors, as it did on releases prior to 16.2.x.

Current behavior:

After upgrading to 16.2.x and above, the Discovery Service repeatedly logs "Error updating discovery config status" with the error discovery config "" not found, even though EC2 discovery itself keeps working:

{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:2692","message":"Successfully registered instance client.","component":"proc:1","pid":"7.1","component":"instance:1"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3117","message":"starting upload completer service","component":"proc:1","pid":"7.1","component":"upload:1"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log/upload"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log/upload/streaming"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log/upload/streaming/default"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log/upload"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log/upload/corrupted"}
{"timestamp":"2024-09-23T21:02:35Z","level":"info","caller":"service/service.go:3133","message":"Creating directory.","component":"proc:1","pid":"7.1","component":"upload:1","directory":"/var/lib/teleport/log/upload/corrupted/default"}
{"caller":"filesessions/fileasync.go:196","component":"upload","level":"info","message":"uploader will scan /var/lib/teleport/log/upload/streaming/default every 5s","timestamp":"2024-09-23T21:02:35Z"}
{"caller":"events/complete.go:167","component":"upload:1:completer","level":"info","message":"upload completer will run every 5m0s","timestamp":"2024-09-23T21:02:35Z"}
{"timestamp":"2024-09-23T21:02:48Z","level":"info","caller":"service/connect.go:471","message":"Joining the cluster with a secure token.","component":"proc:1","pid":"7.1"}
{"caller":"join/join.go:284","component":null,"level":"info","message":"Attempting registration via proxy server.","timestamp":"2024-09-23T21:02:48Z"}
{"caller":"join/join.go:291","component":null,"level":"info","message":"Successfully registered via proxy server.","timestamp":"2024-09-23T21:02:48Z"}
{"timestamp":"2024-09-23T21:02:48Z","level":"info","caller":"service/connect.go:531","message":"Successfully obtained credentials to connect to the cluster.","component":"proc:1","pid":"7.1","identity":"Discovery"}
{"timestamp":"2024-09-23T21:02:48Z","level":"info","caller":"service/connect.go:1118","message":"Reusing Instance client.","component":"proc:1","pid":"7.1","identity":"Discovery","additional_system_roles":["App","Discovery","Db","Kube","Node","WindowsDesktop"]}
{"timestamp":"2024-09-23T21:02:48Z","level":"info","caller":"service/connect.go:566","message":"The process successfully wrote the credentials and state to the disk.","component":"proc:1","pid":"7.1","identity":"Discovery"}
{"caller":"cache/cache.go:1022","component":"discovery:service:1:cache","level":"info","message":"Cache \"discovery\" first init succeeded.","timestamp":"2024-09-23T21:02:48Z"}
{"timestamp":"2024-09-23T21:02:48Z","level":"info","caller":"service/connect.go:700","message":"The new service has started successfully. Starting syncing rotation status.","component":"proc:1","pid":"7.1","max_retry_period":256000000000}
{"caller":"common/watcher.go:116","component":"discovery:service","kind":"kube_cluster","level":"info","message":"Starting watcher.","pid":"7.1","timestamp":"2024-09-23T21:02:48Z"}
{"timestamp":"2024-09-23T21:02:48Z","level":"info","caller":"service/discovery.go:129","message":"Discovery service has successfully started","component":"proc:1","pid":"7.1","component":"discovery:service:1"}
{"caller":"common/watcher.go:116","component":"discovery:service","kind":"kube_cluster","level":"info","message":"Starting watcher.","pid":"7.1","timestamp":"2024-09-23T21:02:48Z"}
{"caller":"common/watcher.go:116","component":"discovery:service","kind":"db","level":"info","message":"Starting watcher.","pid":"7.1","timestamp":"2024-09-23T21:02:48Z"}
{"caller":"discovery/status.go:61","component":"discovery:service","discovery_config_name":"","error":"discovery config \"\" not found","level":"info","message":"Error updating discovery config status","pid":"7.1","timestamp":"2024-09-23T21:02:48Z"}
{"caller":"discovery/status.go:61","component":"discovery:service","discovery_config_name":"","error":"discovery config \"\" not found","level":"info","message":"Error updating discovery config status","pid":"7.1","timestamp":"2024-09-23T21:02:48Z"}
{"caller":"discovery/status.go:61","component":"discovery:service","discovery_config_name":"","error":"discovery config \"\" not found","level":"info","message":"Error updating discovery config status","pid":"7.1","timestamp":"2024-09-23T21:02:48Z"}

Bug details:

apiVersion: v1
data:
  teleport.yaml: |-
    "app_service":
      "enabled": false
    "auth_service":
      "enabled": false
    "db_service":
      "enabled": false
    "discovery_service":
      "aws":
      - "install":
          "script_name": "linux-installer-fips"
        "regions":
        - "us-west-2"
        "ssm":
          "document_name": "TeleportDiscoveryInstaller"
        "tags":
          "os": "linux"
          "teleport-discovery": "True"
        "types":
        - "ec2"
      "discovery_group": "agent-install"
      "enabled": true
    "kubernetes_service":
      "enabled": false
    "proxy_service":
      "enabled": false
    "ssh_service":
      "enabled": false
    "teleport":
      "auth_token": "/var/lib/join-token/token"
      "log":
        "format":
          "extra_fields":
          - "timestamp"
          - "level"
          - "component"
          - "caller"
          "output": "json"
        "output": "stderr"
        "severity": "INFO"
      "proxy_server": "teleport.<proxy_server>:443"
    "tracing_service":
      "enabled": true
      "exporter_url": "grpc://otel-collector.tempo.svc.cluster.local:4317"
      "sampling_rate_per_million": 100000
    "version": "v3"
    "windows_desktop_service":
      "enabled": false
kind: ConfigMap
metadata:
  name: teleport-cluster-discovery
  namespace: teleport
{"caller":"cache/cache.go:1022","component":"discovery:service:1:cache","level":"info","message":"Cache \"discovery\" first init succeeded.","timestamp":"2024-09-23T21:07:00Z"}
{"timestamp":"2024-09-23T21:07:00Z","level":"debug","caller":"service/discovery.go:175","message":"Access graph is disabled or not configured. Falling back to the Auth server's access graph configuration.","component":"proc:1","pid":"8.1","component":"discovery:service:1"}
{"caller":"cloud/clients.go:807","component":null,"level":"debug","message":"Initializing AWS session for region xx-xxxx-xx using environment credentials.","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"service/supervisor.go:224","component":"proc:1","level":"debug","message":"Adding service to supervisor.","pid":"8.1","service":"discovery.stop","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"service/supervisor.go:426","component":"proc:1","event":"DiscoveryReady","level":"debug","message":"Broadcasting event.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"service/supervisor.go:451","component":"proc:1","in":"DiscoveryReady","level":"debug","message":"Broadcasting mapped event.","out":"EventMapping(in=[InstanceReady TracingReady DiscoveryReady], out=TeleportReady)","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"services/watcher.go:249","component":"discovery:service","level":"debug","message":"Starting watch.","pid":"8.1","resource-kind":"node","timestamp":"2024-09-23T21:07:01Z"}
{"timestamp":"2024-09-23T21:07:01Z","level":"info","caller":"service/connect.go:700","message":"The new service has started successfully. Starting syncing rotation status.","component":"proc:1","pid":"8.1","max_retry_period":256000000000}
{"caller":"common/watcher.go:116","component":"discovery:service","kind":"kube_cluster","level":"info","message":"Starting watcher.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"services/reconciler.go:113","component":"discovery:service","kind":"kube_cluster","level":"debug","message":"Reconciling 0 current resources with 0 new resources.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"service/supervisor.go:312","component":"proc:1","level":"debug","message":"Service has started.","pid":"8.1","service":"discovery.stop","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"automaticupgrades/channel.go:60","component":null,"level":"debug","message":"'default' automatic update channel manually specified, honoring it.","timestamp":"2024-09-23T21:07:01Z"}
{"timestamp":"2024-09-23T21:07:01Z","level":"info","caller":"service/discovery.go:129","message":"Discovery service has successfully started","component":"proc:1","pid":"8.1","component":"discovery:service:1"}
{"caller":"common/watcher.go:116","component":"discovery:service","kind":"kube_cluster","level":"info","message":"Starting watcher.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"timestamp":"2024-09-23T21:07:01Z","level":"debug","caller":"service/state.go:118","message":"Teleport component has started.","component":"proc:1","pid":"8.1","component":"discovery:service"}
{"caller":"common/watcher.go:116","component":"discovery:service","kind":"db","level":"info","message":"Starting watcher.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"services/reconciler.go:113","component":"discovery:service","kind":"db","level":"debug","message":"Reconciling 0 current resources with 0 new resources.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/discovery.go:1062","component":"discovery:service","level":"debug","message":"EC2 instances discovered (AccountID: xxxxxxxxxxxxx, Instances: [i-xxxxxxxxxxxxx]), starting installation","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/discovery.go:973","component":"discovery:service","level":"debug","message":"All discovered EC2 instances are already part of the cluster.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/status.go:61","component":"discovery:service","discovery_config_name":"","error":"discovery config \"\" not found","level":"info","message":"Error updating discovery config status","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/discovery.go:1062","component":"discovery:service","level":"debug","message":"EC2 instances discovered (AccountID: xxxxxxxxxxxxx, Instances: [i-xxxxxxxxxxxxx]), starting installation","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/discovery.go:973","component":"discovery:service","level":"debug","message":"All discovered EC2 instances are already part of the cluster.","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/status.go:61","component":"discovery:service","discovery_config_name":"","error":"discovery config \"\" not found","level":"info","message":"Error updating discovery config status","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
{"caller":"discovery/discovery.go:1062","component":"discovery:service","level":"debug","message":"EC2 instances discovered (AccountID: xxxxxxxxxxxxx, Instances: [i-xxxxxxxxxxxxx]), starting installation","pid":"8.1","timestamp":"2024-09-23T21:07:01Z"}
ogorbachov commented 1 month ago

Hi @tunguyen9889, I am fighting with the Teleport Discovery Service and FIPS binaries, and I noticed that you created a new installer named linux-installer-fips. Right now, even after two modifications, my SSM run cannot deploy the Teleport FIPS binaries. Teleport support said it is a bug. Could you please give some hints on how you implemented the FIPS installation?

tunguyen9889 commented 1 month ago

@ogorbachov what I did was create a new installer script called linux-installer-fips, as follows:

---
version: v1
kind: installer
metadata:
  name: linux-installer-fips
spec:
  script: |
    #!/usr/bin/env bash
    # shellcheck disable=SC1083,SC2215,SC2288 # caused by Go templating, and shellcheck won't parse if the lines are excluded individually

    set -eu

    upgrade_endpoint="{{ .PublicProxyAddr }}/v1/webapi/automaticupgrades/channel/default"

    # upgrade_endpoint_fetch loads the specified value from the upgrade endpoint. the only
    # currently supported values are 'version' and 'critical'.
    upgrade_endpoint_fetch() {
      host_path="${upgrade_endpoint}/${1}"

      if sf_output="$(curl --proto '=https' --tlsv1.2 -sSf "https://${host_path}")"; then
        # emit output with empty lines and extra whitespace removed
        echo "$sf_output" | grep -v -e '^[[:space:]]*$' | awk '{$1=$1};1'
        return 0
      else
        return 1
      fi
    }

    # get_target_version loads the current value of the /version endpoint.
    get_target_version() {
      if tv_output="$(upgrade_endpoint_fetch version)"; then
        # emit version string with leading 'v' removed if one is present
        echo "${tv_output#v}"
        return 0
      fi
      return 1
    }

    on_ec2() {
      IMDS_TOKEN=$(curl -m5 -sS -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
      [ -z "$IMDS_TOKEN" ] && return 1
      EC2_STATUS=$(curl -o /dev/null -w "%{http_code}" -m5 -sS -H "X-aws-ec2-metadata-token: ${IMDS_TOKEN}" "http://169.254.169.254/latest/meta-data")
      [ "$EC2_STATUS" = "200" ]
    }

    on_azure() {
      AZURE_STATUS=$(curl -o /dev/null -w "%{http_code}" -m5 -sS -H "Metadata: true" --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01")
      [ "$AZURE_STATUS" = "200" ]
    }

    on_gcp() {
      GCP_STATUS=$(curl -o /dev/null -w "%{http_code}" -m5 -sS -H "Metadata-Flavor: Google" "http://metadata.google.internal/")
      [ "$GCP_STATUS" = "200" ]
    }

    (
      flock -n 9 || exit 1
      if test -f /usr/local/bin/teleport; then
        exit 0
      fi
      # shellcheck disable=SC1091
      . /etc/os-release

      TELEPORT_PACKAGE="{{ .TeleportPackage }}-fips"
      TELEPORT_UPDATER_PACKAGE="{{ .TeleportPackage }}-updater"

      if [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then
        # old versions of ubuntu require that keys get added by `apt-key add`, without
        # adding the key apt shows a key signing error when installing teleport.
        if [ "$VERSION_CODENAME" = "xenial" ] || [ "$VERSION_CODENAME" = "trusty" ]; then
          curl -o /tmp/teleport-pubkey.asc https://apt.releases.teleport.dev/gpg
          sudo apt-key add /tmp/teleport-pubkey.asc
          echo "deb https://apt.releases.teleport.dev/ubuntu ${VERSION_CODENAME?} {{ .RepoChannel }}" | sudo tee /etc/apt/sources.list.d/teleport.list
          rm /tmp/teleport-pubkey.asc
        else
          sudo curl https://apt.releases.teleport.dev/gpg \
            -o /usr/share/keyrings/teleport-archive-keyring.asc
          echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc]  https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} {{ .RepoChannel }}" | sudo tee /etc/apt/sources.list.d/teleport.list >/dev/null
        fi
        sudo apt-get update

        # shellcheck disable=SC2050
        if [ "{{ .AutomaticUpgrades }}" = "true" ]; then
          # automatic upgrades
          if ! target_version="$(get_target_version)"; then
            # error getting the target version
            sudo DEBIAN_FRONTEND=noninteractive apt-get install -y "$TELEPORT_PACKAGE" jq "$TELEPORT_UPDATER_PACKAGE"
          elif [ "$target_version" == "none" ]; then
            # no target version advertised
            sudo DEBIAN_FRONTEND=noninteractive apt-get install -y "$TELEPORT_PACKAGE" jq "$TELEPORT_UPDATER_PACKAGE"
          else
            # successfully retrieved target version
            sudo DEBIAN_FRONTEND=noninteractive apt-get install -y "$TELEPORT_PACKAGE=$target_version" jq "$TELEPORT_UPDATER_PACKAGE=$target_version"
          fi
        else
          # no automatic upgrades
          sudo apt-get install -y "$TELEPORT_PACKAGE" jq
        fi

      elif [ "$ID" = "amzn" ] || [ "$ID" = "rhel" ]; then
        if [ "$ID" = "rhel" ]; then
          VERSION_ID=${VERSION_ID//\.*/} # convert version numbers like '7.2' to only include the major version
        fi
        sudo yum install -y yum-utils
        sudo yum-config-manager --add-repo \
          "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/{{ .RepoChannel }}/teleport.repo")"

        # shellcheck disable=SC2050
        if [ "{{ .AutomaticUpgrades }}" = "true" ]; then
          # automatic upgrades
          if ! target_version="$(get_target_version)"; then
            # error getting the target version
            sudo yum install -y "$TELEPORT_PACKAGE" jq "$TELEPORT_UPDATER_PACKAGE"
          elif [ "$target_version" == "none" ]; then
            # no target version advertised
            sudo yum install -y "$TELEPORT_PACKAGE" jq "$TELEPORT_UPDATER_PACKAGE"
          else
            # successfully retrieved target version
            sudo yum install -y "$TELEPORT_PACKAGE-$target_version" jq "$TELEPORT_UPDATER_PACKAGE-$target_version"
          fi
        else
          # no automatic upgrades
          sudo yum install -y "$TELEPORT_PACKAGE" jq
        fi

      elif [ "$ID" = "sles" ] || [ "$ID" = "opensuse-tumbleweed" ] || [ "$ID" = "opensuse-leap" ]; then
        if [ "$ID" = "opensuse-tumbleweed" ]; then
          VERSION_ID="15" # tumbleweed uses dated VERSION_IDs like 20230702
        else
          VERSION_ID="${VERSION_ID//.*/}" # convert version numbers like '7.2' to only include the major version
        fi
        sudo rpm --import "https://zypper.releases.teleport.dev/gpg"
        sudo zypper --non-interactive addrepo "$(rpm --eval "https://zypper.releases.teleport.dev/sles/$VERSION_ID/Teleport/%{_arch}/{{ .RepoChannel }}/teleport.repo")"
        sudo zypper --gpg-auto-import-keys refresh

        # shellcheck disable=SC2050
        if [ "{{ .AutomaticUpgrades }}" = "true" ]; then
          # automatic upgrades
          if ! target_version="$(get_target_version)"; then
            # error getting the target version
            sudo zypper --non-interactive install -y "$TELEPORT_PACKAGE" jq "$TELEPORT_UPDATER_PACKAGE"
          elif [ "$target_version" == "none" ]; then
            # no target version advertised
            sudo zypper --non-interactive install -y "$TELEPORT_PACKAGE" jq "$TELEPORT_UPDATER_PACKAGE"
          else
            # successfully retrieved target version
            sudo zypper --non-interactive install -y "$TELEPORT_PACKAGE-$target_version" jq "$TELEPORT_UPDATER_PACKAGE-$target_version"
          fi
        else
          # no automatic upgrades
          sudo zypper --non-interactive install -y "$TELEPORT_PACKAGE" jq
        fi

      else
        echo "Unsupported distro: $ID"
        exit 1
      fi

      if on_azure; then
        API_VERSION=$(curl -m5 -sS -H "Metadata: true" --noproxy "*" "http://169.254.169.254/metadata/versions" | jq -r ".apiVersions[-1]")
        INSTANCE_INFO=$(curl -m5 -sS -H "Metadata: true" --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=$API_VERSION&format=json")

        REGION="$(echo "$INSTANCE_INFO" | jq -r .compute.location)"
        RESOURCE_GROUP="$(echo "$INSTANCE_INFO" | jq -r .compute.resourceGroupName)"
        SUBSCRIPTION_ID="$(echo "$INSTANCE_INFO" | jq -r .compute.subscriptionId)"
        VM_ID="$(echo "$INSTANCE_INFO" | jq -r .compute.vmId)"

        JOIN_METHOD=azure
        LABELS="teleport.internal/vm-id=${VM_ID},teleport.internal/subscription-id=${SUBSCRIPTION_ID},teleport.internal/region=${REGION},teleport.internal/resource-group=${RESOURCE_GROUP}"

      elif on_ec2; then
        IMDS_TOKEN=$(curl -m5 -sS -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
        INSTANCE_INFO=$(curl -m5 -sS -H "X-aws-ec2-metadata-token: ${IMDS_TOKEN}" "http://169.254.169.254/latest/dynamic/instance-identity/document")

        ACCOUNT_ID="$(echo "$INSTANCE_INFO" | jq -r .accountId)"
        INSTANCE_ID="$(echo "$INSTANCE_INFO" | jq -r .instanceId)"
        REGION="$(echo "$INSTANCE_INFO" | jq -r .region)"

        JOIN_METHOD=iam
        LABELS="teleport.dev/instance-id=${INSTANCE_ID},teleport.dev/account-id=${ACCOUNT_ID},teleport.dev/region=${REGION}"

      elif on_gcp; then
        NAME="$(curl -m5 -sS -H "Metadata-Flavor:Google" "http://metadata.google.internal/computeMetadata/v1/instance/name")"
        # GCP metadata returns fully qualified zone ("projects/<project-id>/zones/<zone>"), so we need to parse the zone name.
        FULL_ZONE="$(curl -m5 -sS -H "Metadata-Flavor:Google" "http://metadata.google.internal/computeMetadata/v1/instance/zone")"
        ZONE="$(basename "$FULL_ZONE")"
        PROJECT_ID=$(curl -m5 -sS -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/project-id")

        JOIN_METHOD=gcp
        LABELS="teleport.internal/name=${NAME},teleport.internal/zone=${ZONE},teleport.internal/project-id=${PROJECT_ID}"

      else
        echo "Could not determine cloud provider"
        exit 1
      fi

      # generate /etc/default/teleport environment file
      echo 'ARGS="--fips --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid --diag-addr=0.0.0.0:3434"' | sudo tee /etc/default/teleport

      # generate /etc/systemd/system/teleport.service file
      echo '[Unit]
    Description=Teleport Service installed by Teleport EC2 Discovery Service
    After=network.target

    [Service]
    Type=simple
    Restart=on-failure
    EnvironmentFile=-/etc/default/teleport
    ExecStart=/usr/local/bin/teleport start $ARGS
    ExecReload=/bin/kill -HUP $MAINPID
    PIDFile=/run/teleport.pid
    LimitNOFILE=524288

    [Install]
    WantedBy=multi-user.target' | sudo tee /etc/systemd/system/teleport.service

      # generate teleport ssh config
      # token is read as a parameter from the AWS ssm script run and
      # passed as the first argument to the script
      sudo /usr/local/bin/teleport node configure \
        --proxy="{{ .PublicProxyAddr }}" \
        --join-method=${JOIN_METHOD} \
        {{- if .AzureClientID }}
        --azure-client-id="{{ .AzureClientID }}" \
        {{ end -}}
        --token="$1" \
        --output=file \
        --labels="${LABELS}"

      # change log output to json format
      sudo sed -i "s/      output: text/      output: json/" /etc/teleport.yaml

      # enable and start teleport service
      sudo systemctl enable --now teleport

    ) 9>/var/lock/teleport_install.lock

As you can see, I added -fips to TELEPORT_PACKAGE and created /etc/default/teleport with --fips in ARGS. You can create the installer with the command tctl create -f installer.yaml, then update the script_name in your Discovery config to point to the new installer script. Feel free to ping @James Nguyen in the Teleport Community Slack if you want to discuss further.
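To make the wiring explicit, here is a minimal sketch of those two steps; installer.yaml is just an assumed filename for the resource shown above, and the discovery_service excerpt mirrors the ConfigMap from the bug report:

# 1. Register the custom installer resource with the cluster
tctl create -f installer.yaml

The corresponding discovery_service section of teleport.yaml then references it via script_name:

discovery_service:
  enabled: true
  discovery_group: "agent-install"
  aws:
  - types: ["ec2"]
    regions: ["us-west-2"]
    install:
      # name of the custom installer resource created above
      script_name: "linux-installer-fips"
    ssm:
      document_name: "TeleportDiscoveryInstaller"
    tags:
      os: "linux"
      teleport-discovery: "True"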

ogorbachov commented 1 month ago

Thank you very much!

tunguyen9889 commented 1 month ago

#47171 was created to quiet the logs.