Closed by gawertm 1 year ago
kube-node-agent is deprecated, so we need to rely on the Teleport SSH server.
Below is the conversation with the Teleport engineer.
Task details are mentioned in this document.
nice, maybe you can walk the team through it in tomorrow's refinement?
yeah, sounds good.
Teleport cluster (standalone) for PoC - https://teleport.demo.gaws.gigantic.io (GitHub SSO enabled)
Teleport operator v0.0.0 is deployed on golem for testing via opsctl deploy; from golem it connects to the demo Teleport cluster.
If I want to log in with GitHub, it asks me to authorize "tuladhar".
also there is this warning: "Teleport v13.1.0 is now available, please consider upgrading your Cluster." any reason we didn't test with latest teleport version?
Wow, Teleport v13.1.0 was just released a few days ago. I'll test it and upgrade.
@gawertm The authorization issue is fixed now.
Upgraded to Teleport v13.1.0
Side note on the join token for SSH: with Flatcar, we might need to do it with Ignition.
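For illustration, here is a minimal sketch of what the node-side join could look like once Ignition has dropped a token onto the host. The `teleport node configure` invocation, token path, proxy address and labels are assumptions for the sketch, not the actual cluster-aws wiring:

```sh
# Sketch only: what an Ignition-provisioned unit might run on a Flatcar node.
# Token path, proxy address and labels are placeholders.
teleport node configure \
  --output=file:///etc/teleport.yaml \
  --proxy=teleport.demo.gaws.gigantic.io:443 \
  --token=/var/lib/teleport/join-token \
  --labels=cluster=tuladhar,mc=golem,role=worker

# Run the agent with the generated config (in practice via a systemd unit).
teleport start --config=/etc/teleport.yaml
```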
As discussed in the standup, here's a rough diagram of the teleport-kube-agent-app deployment architecture. Initially, for Kubernetes API access, the plan was to have teleport-kube-agent-app in the default-apps collection, but with this approach we saw a challenge in injecting the Teleport join token as a secret on the workload cluster in particular. To get around this, we need to deploy teleport-kube-agent-app from the teleport-operator running in the management cluster, similar to how I believe we do it with the dex app.
Finally, tsh ssh is now working on Flatcar! :tada: I managed to add some more labels, so it can scale to SSH logins across the nodes of multiple clusters.
You can try it yourself by creating a new workload cluster in golem, using cluster-aws version 0.36.0-4cb9a4faa5f1cec109f3a146371a37388ab03eb5 from the test catalog.
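Before listing nodes you need to be logged in to the PoC Teleport cluster; assuming the GitHub SSO flow mentioned above, that's roughly:

```sh
# Assumed login step, using the PoC proxy address from earlier in this thread.
tsh login --proxy teleport.demo.gaws.gigantic.io --auth github
```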
In the example below, I have a workload cluster named tuladhar.
Listing nodes registered to Teleport for the tuladhar workload cluster
~> tsh ls cluster=tuladhar
Node Name Address Labels
--------------- ---------- ---------------------------------------------------------------------------------------------------------
ip-10-0-123-255 ⟵ Tunnel arch=x86_64,baseDomain=gaws.gigantic.io,cluster=tuladhar,mc=golem,node=ip-10-0-123-255,role=control-plane
ip-10-0-130-47 ⟵ Tunnel arch=x86_64,baseDomain=gaws.gigantic.io,cluster=tuladhar,mc=golem,node=ip-10-0-130-47,role=control-plane
ip-10-0-240-159 ⟵ Tunnel arch=x86_64,baseDomain=gaws.gigantic.io,cluster=tuladhar,mc=golem,node=ip-10-0-240-159,role=control-plane
ip-10-0-86-35 ⟵ Tunnel arch=x86_64,baseDomain=gaws.gigantic.io,cluster=tuladhar,mc=golem,node=ip-10-0-86-35,role=worker
Log in to one of the nodes of the tuladhar workload cluster
> tsh ssh giantswarm@node=ip-10-0-123-255,cluster=tuladhar,mc=golem
Update Strategy: No Reboots
giantswarm@ip-10-0-123-255 ~ $ id
uid=1000(giantswarm) gid=1000(giantswarm) groups=1000(giantswarm),150(sudo) context=system_u:system_r:kernel_t:s0
giantswarm@ip-10-0-123-255 ~ $ hostname
ip-10-0-123-255
giantswarm@ip-10-0-123-255 ~ $
And similarly, Kubernetes access via Teleport is also working.
List Kubernetes clusters registered to Teleport
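The exact command isn't shown in the paste; presumably the listing below comes from something like:

```sh
# Presumed command for the cluster listing below.
tsh kube ls
```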
Kube Cluster Name Labels Selected
----------------- ------ --------
golem
golem-demo
golem-tuladhar
Log in to the golem MC cluster
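Again, the command isn't in the paste; presumably something like:

```sh
# Presumed command that produced the output below.
tsh kube login golem
```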
Logged into Kubernetes cluster "golem". Start the local proxy:
tsh proxy kube -p 8443
Use the kubeconfig provided by the local proxy, and try 'kubectl version' to test the connection.
Use tsh kubectl to interact with the cluster.
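The node listing below was then presumably produced via the tsh-managed kubeconfig, e.g.:

```sh
# Presumed command for the node listing below.
tsh kubectl get nodes
```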
NAME STATUS ROLES AGE VERSION
ip-10-0-116-236.eu-west-2.compute.internal Ready worker 19d v1.24.10
ip-10-0-150-155.eu-west-2.compute.internal Ready control-plane,master 4d23h v1.24.10
ip-10-0-158-95.eu-west-2.compute.internal Ready worker 18d v1.24.10
ip-10-0-187-106.eu-west-2.compute.internal Ready worker 18d v1.24.10
ip-10-0-188-134.eu-west-2.compute.internal Ready worker 19d v1.24.10
ip-10-0-221-215.eu-west-2.compute.internal Ready control-plane,master 4d23h v1.24.10
ip-10-0-237-205.eu-west-2.compute.internal Ready worker 18d v1.24.10
ip-10-0-245-22.eu-west-2.compute.internal Ready worker 18d v1.24.10
ip-10-0-68-122.eu-west-2.compute.internal Ready control-plane,master 4d23h v1.24.10
UI Access
You can access the Teleport cluster via the WebUI at test.teleport.giantswarm.io, using the GitHub integration.
Terminal Access
You can access Teleport via the CLI using the tsh client:
tsh login --proxy test.teleport.giantswarm.io --auth github
teleport-operator is deployed on the MC; it authenticates to the Teleport cluster via a long-lived identity certificate file and generates the short-lived tokens used for registering nodes/clusters with Teleport. teleport-operator will be deployed via MC bootstrap, where the identity file is supplied from LastPass.
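For context, a long-lived identity file like the one stored in LastPass can be issued with tctl; the user name and TTL below are placeholders, not the actual values used:

```sh
# Sketch: issuing an identity file on the Teleport auth side.
# User name and TTL are placeholders.
tctl auth sign --user=teleport-operator --format=file --out=identity --ttl=8760h
```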
Use of tokens 👉 There are two ways tokens are used:
1) SSH: teleport-operator creates a secret called $CLUSTER-teleport-join-token in the cluster namespace by reconciling Cluster CRs; cluster-aws then uses the secret to install the Teleport daemon and join the node for SSH access.
2) Kubernetes: teleport-operator creates a short-lived token on demand and deploys teleport-kube-agent-app on the management cluster as well as the workload clusters by reconciling Cluster CRs.

This looks like an amazing milestone towards Teleport. I also checked the UI, and at the top I can see a drop-down for the cluster. Is this where we could potentially switch to other Teleport "leaf servers", e.g. for customers that want their own instance? Or how would we handle this?
Yes @gawertm, the drop-down for the cluster is where we can switch to other Teleport "leaf server" clusters.