- [ ] Direct mode (set `listen_addr`):
  - [ ] Can connect to AD desktop defined in static `hosts` section.
  - [ ] Can connect to AD desktop defined in `static_hosts` section.
  - [ ] Can connect to non-AD desktop defined in `static_hosts` section.
  - [ ] Can connect to non-AD desktop defined in `non_ad_hosts` section.
- [ ] IoT mode (reverse tunnel through the proxy):
  - [ ] Can connect to AD desktop defined in static `hosts` section.
  - [ ] Can connect to AD desktop defined in `static_hosts` section.
  - [ ] Can connect to non-AD desktop defined in `static_hosts` section.
  - [ ] Can connect to non-AD desktop defined in `non_ad_hosts` section.
- [ ] Connect multiple `windows_desktop_service`s to the same Teleport cluster and verify that connections to desktops on different AD domains work. (Attempt to connect several times to verify that you are routed to the correct `windows_desktop_service`.)
- [ ] Set `client_idle_timeout` to a small value and verify that idle sessions are terminated (the session should end and an audit event will confirm it was due to idle connection).
- Labeling:
  - [ ] All desktops have the `teleport.dev/origin` label.
  - [ ] Desktops have `teleport.dev` labels for OS, OS Version, and DNS hostname.
  - [ ] Labels from `static_hosts` are applied to the correct desktops.
- [ ] Disable directory sharing in the role (`desktop_directory_sharing: false`) and confirm that the option to share a directory doesn't appear in the menu.
- Session recording:
  - [ ] Sessions are recorded in sync recording modes (`mode: node-sync` or `mode: proxy-sync`).
  - [ ] Sessions are recorded in async recording modes (`mode: node` or `mode: proxy`).
- Audit events:
  - [ ] `windows.desktop.session.start` (`TDP00I`) emitted on start.
  - [ ] `windows.desktop.session.start` (`TDP00W`) emitted when a session fails to start (due to RBAC, or a desktop lock, for example).
  - [ ] `client.disconnect` (`T3006I`) emitted when a session is terminated by a lock or fails to start due to a lock.
  - [ ] `windows.desktop.session.end` (`TDP01I`) emitted on end.
  - [ ] `desktop.clipboard.send` (`TDP02I`) emitted for local copy -> remote paste.
  - [ ] `desktop.clipboard.receive` (`TDP03I`) emitted for remote copy -> local paste.
  - [ ] `desktop.directory.share` (`TDP04I`) emitted when Teleport starts sharing a directory.
  - [ ] `desktop.directory.read` (`TDP05I`) emitted when a file is read over the shared directory.
  - [ ] `desktop.directory.write` (`TDP06I`) emitted when a file is written to over the shared directory.
- Screen size:
  - [ ] Desktops with a `screen_size` in their spec always use the same screen size.
  - [ ] Desktops with a `screen_size` do not resize automatically.
  - [ ] Setting a `screen_size` dimension larger than 8192 fails.
- Dynamic registration:
  - [ ] A desktop defined in the `non_ad_hosts` section of the config file is visible in the UI with the `teleport.dev/ad: false` label.
  - [ ] `tctl` resource commands work: `tctl get dynamic_windows_desktop` works with all supported formats.
  - [ ] Creating a dynamic desktop matching multiple `windows_desktop_service`s creates Windows desktops for each matching WDS.
  - [ ] Deleting a dynamic desktop matching multiple `windows_desktop_services` deletes the corresponding Windows desktops.

Verify that our software runs on the minimum supported OS versions as per https://goteleport.com/docs/installation/#operating-system-support
- [ ] `tsh` runs on the minimum supported Windows version

Azure offers virtual machines with the Windows 10 2016 LTSB image. This image runs on Windows 10 rev. 1607, which is the exact minimum Windows version that we support.

- [ ] `tsh` runs on the minimum supported macOS version
- [ ] `tctl` runs on the minimum supported macOS version
- [ ] `teleport` runs on the minimum supported macOS version
- [ ] `tbot` runs on the minimum supported macOS version
- [ ] `tsh` runs on the minimum supported Linux version
- [ ] `tctl` runs on the minimum supported Linux version
- [ ] `teleport` runs on the minimum supported Linux version
- [ ] `tbot` runs on the minimum supported Linux version
runs on the minimum supported Linux versiontctl bots add robot --roles=access
. Follow the instructions provided in the output to start tbot
SIGUSR1
and SIGHUP
to a running tbot process causes a renewal and new certificates to be generatedWith an SSH node registered to the Teleport cluster:
ssh_config
in the destination directorytsh
with the identity file in the destination directoryWith a Postgres DB registered to the Teleport cluster:
tbot db connect
with a database outputtbot proxy db
with a database outputtbot proxy db --tunnel
with a database output and then able to connect to the database through the tunnel without credentialsWith a Kubernetes cluster registered to the Teleport cluster:
kubeconfig
produced by a Kubernetes output can be used to run basic commands (e.g kubectl get pods
)
With a HTTP application registered to the Teleport cluster:
curl --cert ./out/tlscert --key ./out/key https://httpbin.teleport.example.com/headers
)tbot proxy app httpbin
with an application output and then able to connect to the application through the tunnel without credentials curl localhost:port/headers
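The outputs exercised above can be produced by a `tbot` configuration along these lines (a sketch only; the proxy address, token name, and paths are assumptions, not values from this plan):

```yaml
# Hypothetical tbot config sketch. proxy_server, the join token,
# and all paths are placeholder assumptions.
version: v2
proxy_server: teleport.example.com:443
onboarding:
  join_method: token
  token: robot-token
storage:
  type: directory
  path: /var/lib/teleport/bot
outputs:
  # Identity output: produces an identity file and ssh_config in ./out
  - type: identity
    destination:
      type: directory
      path: ./out
```

Additional `database`, `kubernetes`, and `application` outputs can be appended to the `outputs` list for the DB, Kubernetes, and HTTP application checks.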
Host users creation docs Host users creation RFD
Host users are considered "managed" when they belong to one of the Teleport system groups: `teleport-system` or `teleport-keep`. Users outside of these groups are considered "unmanaged". Any users in the `teleport-static` group are also managed, but not considered for role-based host user creation.
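The host user creation checks that follow can be driven by a role along these lines (a sketch; the role name, login, groups, and shell are assumptions):

```yaml
# Hypothetical role sketch for exercising host user creation.
# Swap create_host_user_mode between "insecure-drop", "keep", and "off"
# to cover the modes tested below.
kind: role
version: v7
metadata:
  name: host-user-test
spec:
  options:
    create_host_user_mode: keep
    create_host_user_default_shell: /bin/zsh
  allow:
    logins: [alice]
    host_groups: [docker]
    node_labels:
      "*": "*"
```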
- [ ] Verify users are added to the `teleport-system` group when `create_host_user_mode: "insecure-drop"`
- [ ] Verify users are added to the `teleport-keep` group when `create_host_user_mode: "keep"`
- [ ] Verify host users are updated to reflect `host_groups` changes (additions and removals)
- [ ] Verify host user creation can be disabled (`create_host_user_mode: "off"`)
- [ ] Verify users in `teleport-system` are cleaned up after their session ends
- [ ] Verify users in `teleport-keep` are not cleaned up after their session ends
- [ ] Verify users are cleaned up when `teleport-keep` is not included in `host_groups`
- [ ] Verify users are not cleaned up when `teleport-keep` is included in `host_groups`
- [ ] Verify that `disable_create_host_user: true` stops user creation from occurring
- [ ] Verify that `create_host_user_default_shell: <bash, zsh, fish, etc.>` sets the default shell for a newly created host user to the chosen shell (validated by confirming the shell path has been written to the end of the user's record in `/etc/passwd`)

Verify the CA contents during each rotation phase (`tctl get cert_authority`):
- [ ] `standby` phase: only `active_keys`, no `additional_trusted_keys`
- [ ] `init` phase: `active_keys` and `additional_trusted_keys`
- [ ] `update_clients` and `update_servers` phases: the certs from the `init` phase are swapped
- [ ] `standby` phase: only the new certs remain in `active_keys`, nothing in `additional_trusted_keys`
- [ ] `rollback` phase (second pass, after completing a regular rotation): same content as in the `init` phase
- [ ] `standby` phase after `rollback`: same content as in the previous `standby` phase
- [ ] Changing the `signature_algorithm_suite` should change the algorithm used by new CA issuers when entering `init`. Only issued certificates change algorithm if the suite is changed at other times.
- [ ] After changing the `signature_algorithm_suite`, entering the `rollback` phase correctly restores the original issuer, no matter the algorithm.
- [ ] `tsh apps login` works
- [ ] `kubectl get po` works after `tsh kube login`
Verify that SSH works, and that resumable SSH is not interrupted across a Teleport Cloud tenant upgrade.

| | Standard node | Non-resuming node | Peered node | Agentless node |
|---|---|---|---|---|
| `tsh ssh` | | | | |
| `tsh ssh --no-resume` | | | | |
| Teleport Connect | | | | |
| Web UI (not resuming) | | | | |
| OpenSSH (standard `tsh config`) | | | | |
| OpenSSH (changing `ProxyCommand` to `tsh proxy ssh --no-resume`) | | | | |
Verify that SSH works, and that resumable SSH is not interrupted across a control plane restart (of either the root or the leaf cluster).

| | Tunnel node | Direct dial node |
|---|---|---|
| `tsh ssh` | | |
| `tsh ssh --no-resume` | | |
| `tsh ssh` (from a root cluster) | | |
| `tsh ssh --no-resume` (from a root cluster) | | |
| OpenSSH (without `ProxyCommand`) | n/a | |
| OpenSSH's `ssh-keyscan` | n/a | |
Add a role with `pin_source_ip: true` (requires Enterprise) to test IP pinning.
Testing will require changing your IP (that Teleport Proxy sees).
Docs: IP Pinning
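A minimal sketch of such a role (the role name, login, and labels are assumptions):

```yaml
# Hypothetical role sketch enabling IP pinning for its holders.
kind: role
version: v7
metadata:
  name: pinned-ip
spec:
  options:
    pin_source_ip: true
  allow:
    logins: [alice]
    node_labels:
      "*": "*"
```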
- [ ] Verify that `tsh ssh` works on the root cluster while your IP is unchanged
- [ ] Verify that `tsh ssh` fails on the root cluster after your IP changes
- [ ] Verify that `tsh ssh` works on the leaf cluster while your IP is unchanged
- [ ] Verify that `tsh ssh` fails on the leaf cluster after your IP changes

[x] Access Monitoring
[ ] Access List
[ ] Verify Okta Sync Service
[ ] Verify `okta_import_rule` rule configuration.

Verify SAML IdP service provider resource management.
- [ ] `saml_idp_service_provider` resource can be created, updated, and deleted with the `tctl create/update/delete sp.yaml` command.
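As a sketch, an `sp.yaml` of the kind used above might look like this (the resource name, entity ID, and ACS URL are placeholder assumptions):

```yaml
# Hypothetical service provider resource for tctl create sp.yaml.
kind: saml_idp_service_provider
version: v1
metadata:
  name: example-sp
spec:
  entity_id: https://sp.example.com/metadata
  acs_url: https://sp.example.com/acs
```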
- [ ] The resource can be created with only a `name` and `entity descriptor`.
- [ ] The resource can be created with a `name`, `entity_id`, and `acs_url`.
- [ ] Attribute mapping can be tested with `$ tctl idp saml test-attribute-mapping --users <usernames or name of file containing user spec> --sp <name of file containing service provider spec> --format <json/yaml/defaults to text>`
- [ ] With `preset: gcp-workforce`, Teleport adds the relay state `relay_state: https://console.cloud.google/` value in the resulting resource spec.

Region: eu-central-1
EKS with a single node group:

- Min: 2, Max: 10 instances
- `m5.4xlarge` instance type
- Kubernetes 1.29

Teleport cluster (all deployed on the EKS cluster):

Databases:

- `db.t4g.xlarge` instance class. Accessed through RDS Proxy with a single RW endpoint.
- `db.t4g.xlarge` instance class. Accessed through RDS Proxy with a single RW endpoint.

Note: Databases were configured using discovery running inside the database agent. `tsh bench` commands were executed inside the cluster.
(This doesn't seem to be a regression; it was likely broken in the last few versions.)
Three proxies on tenant4x nodes in each of usw2, use1, euc1, two auths on tenant8x nodes in euc1, cluster configured for higher number of incoming connections, no SNS for audit logs.
Three EKS clusters in usw2, use1, euc1, each with 64 m5.8xlarge nodes running 20k agents each, joining with a static token.
Two m7a.48xlarge runners in usw2 and euc1 running tbot and ssh (the ansible-like load), connecting to all 60k nodes and running 4 commands every 360 seconds on average (see assets/loadtest/ansible-like
), so about 1300 new sessions per second. Nodes were referenced by hostname but tbot was configured to use proxy templates and nodes were searched by predicate.
The agents were spun up and left alone for about 15 minutes, then the sessions were started and ran for about 30 minutes.
From a cold start (unused Cloud staging tenant), the DynamoDB table in use took a while to scale internally, with throttling that lasted for 5 or 10 minutes; no problems after that.
The ansible-like setup isn't capable of handling new or dropped agents, and in a first attempt (with clusters running 40 nodes instead of 64) some went missing because the Kubernetes node they were on died; a new set of agents was then spun up, with GOMAXPROCS set to 2 and the memory request and limit set to 350Mi, which fit with a bit of extra headroom on 64 nodes and resulted in no more errors.
[^1]: 30k tests were performed using the simulated method described in the v14 Test Plan
SSO MFA ceremony breaks `tctl` on auth-only clusters: https://github.com/gravitational/teleport/issues/48633
Three proxies on tenant4x nodes in each of usw2, use1, euc1. Four auths on tenant8x nodes, two in usw2 and two in euc1.
Three EKS clusters in usw2, use1, euc1, each with 50 m5.8xlarge nodes running 20k agents each, joining with a static token.
Two m7a.48xlarge runners in usw2 and euc1 running tbot and ssh (the ansible-like load), connecting to all 60k nodes and running 4 commands every 360 seconds on average (see assets/loadtest/ansible-like
), so about 1300 new sessions per second. Nodes were referenced by hostname but tbot was configured to use proxy templates and nodes were searched by predicate.
The agents were spun up and left alone for a few minutes. The first set of 60k sessions were started, cluster was left to stabilize for a bit, then the second set was started.
The initial attempt appeared to overload some element of the cloud staging networking stack, resulting in a large number of failed connections with no apparent Teleport-originating cause. Subsequent attempts succeeded.
non-blocking: IBM docs are out of date
Manual Testing Plan
Below are the items that should be manually tested with each release of Teleport. These tests should be run on both a fresh installation of the version to be released as well as an upgrade of the previous version of Teleport.
[x] Adding nodes to a cluster @eriktate
[x] Labels @eriktate
[x] Trusted Clusters @bernardjkim
[x] RBAC @eriktate
Make sure that invalid and valid attempts are reflected in audit log. Do this with both Teleport and Agentless nodes.
[x] Verify that custom PAM environment variables are available as expected. @atburke
[x] Users @codingllama
With every user combination, try to login and signup with invalid second factor, invalid password to see how the system reacts.
WebAuthn in the release `tsh` binary is implemented using libfido2 for Linux/macOS. Ask for a statically built pre-release binary for realistic tests. (`tsh fido2 diag` should work in our binary.) WebAuthn in the Windows build is implemented using `webauthn.dll`. (`tsh webauthn diag` with a security key selected in the dialog should work.)

Touch ID requires a signed `tsh`; ask for a signed pre-release binary so you may run the tests.

Windows WebAuthn requires Windows 10 19H1 and a device capable of Windows Hello.
[x] Adding Users OTP
[x] Adding Users WebAuthn
[x] macOS/Linux
[x] Windows
[x] Adding Users via platform authenticator
[x] Touch ID
[x] Windows Hello
[x] Managing MFA devices
[x] Add an OTP device with `tsh mfa add`
[x] Add a WebAuthn device with `tsh mfa add`
[x] Add a platform authenticator device with `tsh mfa add`
[x] List MFA devices with `tsh mfa ls`
[x] Remove an OTP device with `tsh mfa rm`
[x] Remove a WebAuthn device with `tsh mfa rm`
[x] Removing the last MFA device on the user fails
[x] Login with MFA
[x] Add an OTP, a WebAuthn, and a Touch ID/Windows Hello device with `tsh mfa add`
[x] Login via OTP
[x] Login via WebAuthn
[x] Login via platform authenticator
[x] Login via WebAuthn using a U2F/CTAP1 device
[x] Login OIDC
[x] Login SAML
[x] Login GitHub
[x] Deleting Users
[x] Backends @rosstimothy
[x] Session Recording @capnspacehook
[x] Enhanced Session Recording @Joerger
- [ ] Verify that `disk`, `command`, and `network` events are being logged.
- [ ] Verify the `enhanced_recording` role option.

[x] Auditd @Joerger
[x] Audit Log @rosstimothy
- [ ] Verify that `server_id` is the ID of the node in "session_recording: node" mode
- [ ] Verify that `server_id` is the ID of the node in "session_recording: proxy" mode
- [ ] Verify that `forwarded_by` is the ID of the proxy in "session_recording: proxy" mode

The Node/Proxy ID may be found at `/var/lib/teleport/host_uuid` in the corresponding machine. Node IDs may also be queried via `tctl nodes ls`.

- [ ] Verify that `scp` commands are recorded

Subsystem testing may be achieved using both Recording Proxy mode and OpenSSH integration.
Assuming the proxy is `proxy.example.com:3023` and `node1` is a node running OpenSSH/sshd, running an SFTP session through the proxy will trigger a subsystem audit log.

[x] External Audit Storage @nklaassen
External Audit Storage must be tested on an Enterprise Cloud tenant. Instructions for deploying a custom release to a cloud staging tenant: https://github.com/gravitational/teleport.e/blob/master/dev-deploy.md
- [ ] Verify that `tsh play <session-id>` works

[x] Interact with a cluster using `tsh` @capnspacehook

These commands should ideally be tested in both recording and non-recording modes, as they are implemented in different ways.

[x] Interact with a cluster using `ssh` @Joerger. Make sure to test both recording and regular proxy modes.
[x] Verify proxy jump functionality @atburke. Log into the leaf cluster via root, shut down the root proxy, and verify that proxy jump works.
[x] Interact with a cluster using the Web UI @atburke
[x] X11 Forwarding @Joerger
Install `xeyes` and `xclip`:

- Linux: `apt install x11-apps xclip`
- macOS: install and start XQuartz, which comes with `xeyes`. Then `brew install xclip`.

Enable X11 forwarding for a node: `ssh_service.x11.enabled = yes`

- [ ] Successfully X11 forward as both a non-root and a root user:
  - [ ] `tsh ssh -X user@node xeyes`
  - [ ] `tsh ssh -X root@node xeyes`
- [ ] Test trusted vs untrusted forwarding:
  - [ ] `tsh ssh -Y server01 "echo Hello World | xclip -sel c && xclip -sel c -o"` should print "Hello World"
  - [ ] `tsh ssh -X server01 "echo Hello World | xclip -sel c && xclip -sel c -o"` should fail with a "BadAccess" X error

User accounting @atburke

- [ ] Verify that active sessions are tracked in `/var/run/utmp` on Linux.
- [ ] Verify that session history is tracked in `/var/log/wtmp` on Linux.

Combinations @Joerger
For some manual testing, many combinations need to be tested. For example, for interactive sessions the 12 combinations are below.
Add an agentless Node in a local cluster.
Add a Teleport Node in a local cluster.
Add an agentless Node in a remote (leaf) cluster.
Add a Teleport Node in a remote (leaf) cluster.
Teleport with EKS/GKE @tigrato
Teleport with multiple Kubernetes clusters @tigrato
Note: you can use GKE or EKS or minikube to run Kubernetes clusters. Minikube is the only caveat - it's not reachable publicly so don't run a proxy there.
- [ ] Deploy Teleport on a single EKS cluster. After `tsh login`, check that `tsh kube ls` has your cluster, then run `kubectl get nodes` and `kubectl exec -it $SOME_POD -- sh`.
- [ ] Deploy Teleport on a single GKE cluster. After `tsh login`, check that `tsh kube ls` has your cluster, then run `kubectl get nodes` and `kubectl exec -it $SOME_POD -- sh`.
- [ ] Deploy Teleport on a single minikube cluster. After `tsh login`, check that `tsh kube ls` has your cluster, then run `kubectl get nodes` and `kubectl exec -it $SOME_POD -- sh`.
- [ ] Register a second Kubernetes cluster with the same Teleport cluster. After `tsh login`, check that `tsh kube ls` has both clusters. Switch with `tsh kube login`, then run `kubectl get nodes` and `kubectl exec -it $SOME_POD -- sh` on the new cluster.
- [ ] Register additional Kubernetes clusters. After `tsh login`, check that `tsh kube ls` has all clusters.
- Verify Kubernetes access through the Web UI:
  - [ ] The table shows the `name` and `labels` columns
  - [ ] The `Step 2` login value matches the row's `name` column
  - [ ] Searching by `name` or `labels` in the search bar works
  - [ ] Sorting on the `name` column works

Kubernetes exec via WebSockets/SPDY @tigrato
To control usage of websockets on the kubectl side, the environment variable `KUBECTL_REMOTE_COMMAND_WEBSOCKETS` can be used: `KUBECTL_REMOTE_COMMAND_WEBSOCKETS=true kubectl -v 8 exec -n namespace podName -- /bin/bash --version`. With the `-v 8` logging level you should be able to see `X-Stream-Protocol-Version: v5.channel.k8s.io` if kubectl is connected over websockets to Teleport. To run the tests you'll need kubectl version at least 1.29, a Kubernetes cluster v1.29 or less (doesn't support websockets stream protocol v5) and a cluster v1.30 (does support it by default), and access to both through the kube agent and through kubeconfig.

- [ ] Verify exec works with `KUBECTL_REMOTE_COMMAND_WEBSOCKETS=false` (SPDY)
- [ ] Verify exec works with `KUBECTL_REMOTE_COMMAND_WEBSOCKETS=true` against the v1.29 cluster (no `X-Stream-Protocol-Version: v5.channel.k8s.io`)
- [ ] Verify exec works against the v1.30 cluster (look for `X-Stream-Protocol-Version: v5.channel.k8s.io`)

Kubernetes auto-discovery @tigrato
- [ ] Verify dynamic registration of clusters created with `tctl create`.
- [ ] Verify updating clusters with `tctl create -f`.
- [ ] Verify removing clusters with `tctl rm`.
.Kubernetes Secret Storage @hugoShaka
- [ ] Verify secret storage when Teleport is deployed as a `Statefulset`
Kubernetes Pod RBAC @tigrato
- Verify the following `kubernetes_resources` entries:
  - [ ] `{"kind":"pod","name":"*","namespace":"*"}` - must allow access to every pod.
  - [ ] `{"kind":"pod","name":"<somename>","namespace":"*"}` - must allow access to pod `<somename>` in every namespace.
  - [ ] `{"kind":"pod","name":"*","namespace":"<somenamespace>"}` - must allow access to any pod in the `<somenamespace>` namespace.
  - [ ] Verify partial `*` wildcards - `<some-name>-*` and regex for the `name` and `namespace` fields.
  - [ ] Verify access via `go-client`.
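The allow entries above can be exercised with a role sketch along these lines (the role name, groups, and labels are assumptions):

```yaml
# Hypothetical role sketch for the kubernetes_resources checks above.
# Narrow name/namespace to "<somename>"/"<somenamespace>" for the
# scoped variants.
kind: role
version: v7
metadata:
  name: kube-pod-access
spec:
  allow:
    kubernetes_groups: ["viewers"]
    kubernetes_labels:
      "*": "*"
    kubernetes_resources:
      - kind: pod
        name: "*"
        namespace: "*"
```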
- Verify deny `kubernetes_resources`:
  - [ ] Verify `kubernetes_groups` that denies exec into a pod
  - [ ] Verify that exec via `search_as_roles` is not allowed when denied.

Teleport with FIPS mode @eriktate
ACME @timothyb89
Migrations @timothyb89
Command Templates
When interacting with a cluster, the following command templates are useful:
OpenSSH
Teleport
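A hedged sketch of such templates (hostnames, usernames, and the recording proxy's SSH port of 3023 are placeholder assumptions):

```shell
# OpenSSH through the recording proxy. Agent forwarding is required
# so the proxy can record the session; proxy.example.com and node1
# are placeholder names.
ssh -o "ForwardAgent yes" \
    -o "ProxyCommand ssh -p 3023 %r@proxy.example.com -s proxy:%h:%p" \
    user@node1

# Teleport equivalent:
tsh --proxy=proxy.example.com --user=<username> ssh --forward-agent node1
```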
Teleport with SSO Providers
GitHub External SSO @greedy52
`tctl sso` family of commands @Tener

For help with setting up SSO connectors, check out the [Quick GitHub/SAML/OIDC Setup Tips]

`tctl sso configure` helps to construct a valid connector definition:

- [ ] `tctl sso configure github ...` creates valid connector definitions
- [ ] `tctl sso configure oidc ...` creates valid connector definitions
- [ ] `tctl sso configure saml ...` creates valid connector definitions

`tctl sso test` tests a provided connector definition, which can be loaded from a file or piped in with `tctl sso configure` or `tctl get --with-secrets`. Valid connectors are accepted; invalid ones are rejected with sensible error messages.

- [ ] Verify all connector types with `tctl sso test`.

SSO login on remote host @atburke
`tsh` should be running on a remote host (e.g. over an SSH session) and use the local browser to complete an SSO login. Run `tsh login --callback <remote.host>:<port> --bind-addr localhost:<port> --auth <auth>` on the remote host. Note that the `--callback` URL must be able to resolve to the `--bind-addr` over HTTPS.

Teleport Plugins @EdwardDowling @bernardjkim
Teleport Operator @hugoShaka
- [ ] Test the operator by deploying the `teleport-cluster` Helm chart with the operator enabled

AWS Node Joining @hugoShaka

Docs

- [ ] With `ec2:DescribeInstances` permissions for the local account: `TELEPORT_TEST_EC2=1 go test ./integration -run TestEC2NodeJoin`
- [ ] `TELEPORT_TEST_EC2=1 go test ./integration -run TestIAMNodeJoin`
Kubernetes Node Joining @bernardjkim
Azure Node Joining @marcoandredinis
Docs
GCP Node Joining @marcoandredinis
Docs
Cloud Labels @marcoandredinis
- [ ] Create an EC2 instance with tag `foo`: `bar`. Verify that a node running on the instance has label `aws/foo=bar`.
- [ ] Create an Azure VM with tag `foo`: `bar`. Verify that a node running on the instance has label `azure/foo=bar`.
- [ ] Create a GCP instance with label `foo`: `bar` and tag `baz`: `quux`. Verify that a node running on the instance has labels `gcp/label/foo=bar` and `gcp/tag/baz=quux`.

Passwordless @codingllama
This feature has additional build requirements, so it should be tested with a pre-release build (eg: https://cdn.teleport.dev/tsh-v16.0.0-alpha.2.pkg).

This section complements "Users -> Managing MFA devices". The `tsh` binaries for each operating system (Linux, macOS, and Windows) must be tested separately for FIDO2 items.

[x] Diagnostics

Commands should pass all tests.

- [ ] `tsh fido2 diag` (macOS/Linux)
- [ ] `tsh touchid diag` (macOS only)
- [ ] `tsh webauthnwin diag` (Windows only)

[x] Registration

- [ ] Register a security key (`tsh mfa add`, choose WEBAUTHN and passwordless)
- [ ] Register a Touch ID credential (`tsh mfa add`, choose TOUCHID)
- [ ] Register a Windows Hello credential (`tsh mfa add`, choose WEBAUTHN and passwordless)

[x] Login

- [ ] Passwordless login using a security key (`tsh login --auth=passwordless`)
- [ ] Passwordless login using a platform authenticator (`tsh login --auth=passwordless`)
- [ ] `tsh login --auth=passwordless --mfa-mode=cross-platform` uses FIDO2
- [ ] `tsh login --auth=passwordless --mfa-mode=platform` uses the platform authenticator
- [ ] `tsh login --auth=passwordless --mfa-mode=auto` prefers the platform authenticator
- [ ] Passwordless can be disabled (`auth_service.authentication.passwordless = false`)
- [ ] A cluster with `auth_service.authentication.connector_name = passwordless` defaults to passwordless logins
- [ ] Password-based logins still work in such a cluster (`tsh login --auth=local`)

[x] Touch ID support commands

- [ ] `tsh touchid ls` works
- [ ] `tsh touchid rm` works (careful, may lock you out!)

Device Trust @codingllama
Device Trust requires Teleport Enterprise.
This feature has additional build requirements, so it should be tested with a pre-release build (eg: https://cdn.teleport.dev/teleport-ent-v16.0.0-alpha.2-linux-amd64-bin.tar.gz).

Client-side enrollment requires a signed `tsh` for macOS; make sure to use the `tsh` binary from `tsh.app`.
A simple formula for testing device authorization is:
[x] Inventory management
- [ ] Create a device (`tctl devices add`)
- [ ] Create a device and enrollment token (`tctl devices add --enroll`)
- [ ] List devices (`tctl devices ls`)
- [ ] Remove a device (`tctl devices rm`)
- [ ] Remove an enrolled device (`tctl devices rm`)
- [ ] Create an enrollment token (`tctl devices enroll`)
- [ ] Create an enrollment token for an enrolled device (`tctl devices enroll`)

[x] Device enrollment

- [ ] Enroll a macOS device (`tsh device enroll`)
- [ ] Enroll a Windows device (`tsh device enroll`)
- [ ] Enroll a Linux device (`tsh device enroll`)

Linux users need read/write permissions to /dev/tpmrm0. The simplest way is to assign yourself to the `tss` group. See https://goteleport.com/docs/access-controls/device-trust/device-management/#troubleshooting.

Note that different accesses have different certificates (Database, Kube, etc).

[x] Device authentication

Confirm that it works by failing first. Most protocols can be tested using device_trust.mode="required". App Access and Desktop Access require a custom role (see enforcing device trust).

For SSO users, confirm that device web authentication happens successfully.

[x] Device authorization

[x] Device audit (see lib/events/codes.go)

[x] Binary support

- [ ] An unsigned `tsh` for macOS gives a sane error message for `tsh device enroll` attempts.

[x] Device support commands

- [ ] `tsh device collect` (macOS)
- [ ] `tsh device asset-tag` (macOS)
- [ ] `tsh device collect` (Windows)
- [ ] `tsh device asset-tag` (Windows)
- [ ] `tsh device collect` (Linux)
- [ ] `tsh device asset-tag` (Linux)

Hardware Key Support @Joerger
Hardware Key Support is an Enterprise feature and is not available for OSS.
You will need a YubiKey 4.3+ to test this feature.
This feature has additional build requirements, so it should be tested with a pre-release build (eg: https://cdn.teleport.dev/teleport-ent-v16.0.0-alpha.2-linux-amd64-bin.tar.gz).

Server Access

This test should be carried out on Linux, macOS, and Windows.

Set `auth_service.authentication.require_session_mfa: hardware_key_touch` in your cluster auth settings and login.

- [ ] `tsh login`
- [ ] `tsh ssh`
- [ ] `tsh proxy db --tunnel`
HSM Support @nklaassen
Docs
Run the full test suite with each HSM/KMS:
Moderated session @eriktate
Create two Teleport users, a moderator and a user. Configure Teleport roles to require that the moderator moderate the user's sessions. Use `TELEPORT_HOME` to `tsh login` as the user in one terminal, and the moderator in another.

Ensure the default `terminationPolicy` of `terminate` has not been changed.

For each of the following cases, create a moderated session with the user using `tsh ssh` and join this session with the moderator using `tsh join --role moderator`:

- [ ] `Ctrl+C` in the user terminal disconnects the moderator, as the session has ended.
- [ ] `Ctrl+C` in the moderator terminal disconnects the moderator and terminates the user's session, as the session no longer has a moderator.
- [ ] `t` in the moderator terminal terminates the session for all participants.

Performance @rosstimothy @fspmarshall @espadolini
Scaling Test
Scale up the number of nodes/clusters a few times for each configuration below.
1. Verify that there are no memory/goroutine/file descriptor leaks
2. Compare the baseline metrics with the previous release to determine if resource usage has increased
3. Restart all Auth instances and verify that all nodes/clusters reconnect
Perform simulated load testing on non-cloud backends
Perform ansible-like load testing on cloud backends
Perform the following additional scaling tests on a single backend:
Soak Test
Run a 30-minute soak test directly against direct and tunnel nodes and via label-based matching. Tests should be run against a Cloud tenant.
Concurrent Session Test
Run a concurrent session test that will spawn 5 interactive sessions per node in the cluster:
Robustness
Connectivity Issues:
[x] Verify that a lack of connectivity to Auth does not prevent access to resources which do not require a moderated session, in async recording mode, using an already issued certificate.
[x] Verify that a lack of connectivity to Auth prevents access to resources which require a moderated session, in async recording mode, using an already issued certificate.
[x] Verify that an open session is not terminated when all Auth instances are restarted.
Teleport with Cloud Providers
AWS @hugoShaka
GCP @marcoandredinis
IBM @hugoShaka
Application Access @gabrielcorado
- [ ] Verify that `debug_app: true` works.
- [ ] Verify that an application can be accessed via `name.rootProxyPublicAddr` as well as `publicAddr`.
- [ ] Verify that an application in a leaf cluster can be accessed via `name.rootProxyPublicAddr`.
- [ ] Verify that `app.session.start` and `app.session.chunk` events are created in the Audit Log.
- [ ] Verify that `app.session.chunk` points to a 5 minute session archive with multiple `app.session.request` events inside.
- [ ] Verify that `tsh play <chunk-id>` can fetch and print a session chunk archive.
- AWS console access:
  - [ ] Verify logging in with `tsh apps login`.
  - [ ] Verify `tsh aws` commands.
  - [ ] Verify `tsh aws --endpoint-url` (this is a hidden flag).
- Azure CLI access:
  - [ ] Verify logging in with `tsh apps login`.
  - [ ] Verify `tsh az` commands.
  - [ ] Verify `tsh proxy az` and `az` commands.
- GCP CLI access:
  - [ ] Verify logging in with `tsh apps login`.
  - [ ] Verify `tsh gcloud` commands.
  - [ ] Verify `tsh gsutil` commands.
  - [ ] Verify `tsh proxy gcloud` and `gcloud`/`gsutil` commands.
- Dynamic registration:
  - [ ] Verify applications created with `tctl create`.
  - [ ] Verify applications updated with `tctl create -f`.
  - [ ] Verify applications deleted with `tctl rm`.
- [ ] Verify that `Add Application` links to documentation.

Database Access @greedy52
Some tests are marked with "covered by E2E test" and are automatically completed by default. In case the E2E test is flaky or disabled, deselect the task for manual testing.

IMPORTANT: for this round of testing, please pick a signature algorithm suite other than the default `legacy`. See RFD 136. @greedy52 @Tener @GavinFrazar

- [ ] Verify that in-flight queries can be canceled (`select pg_sleep(10)` followed by Ctrl-C is a good query to test.)
- [ ] Verify Redis-compatible databases (`valkey` if possible) @GavinFrazar
- [ ] Verify connections with `assume_role_arn: ""` and `external_id: "<id>"`
- [ ] Verify discovery with `assume_role_arn: ""` and `external_id: "<id>"`
- [ ] Verify automatic user provisioning modes: `keep`, `best_effort_drop`
- Audit events:
  - [ ] `db.session.start` is emitted when you connect.
  - [ ] `db.session.end` is emitted when you disconnect.
  - [ ] `db.session.query` is emitted when you execute a SQL query.
- RBAC:
  - [ ] `tsh db ls` shows only databases matching the role's `db_labels`.
  - [ ] Can only connect as users from `db_users`.
  - [ ] Can only connect to databases from `db_names`.
  - [ ] `db.session.start` is emitted when a connection attempt is denied.
  - [ ] Can only execute commands in databases from `db_names`.
  - [ ] `db.session.query` is emitted when a command fails due to permissions.
- [ ] Verify connecting with `tsh db connect`.
- Dynamic registration:
  - [ ] Verify databases created with `tctl create`.
  - [ ] Verify databases updated with `tctl create -f`.
  - [ ] Verify databases deleted with `tctl rm`.
- [ ] Verify database access when `assume_role_arn` and `external_id` is set.
- [ ] Verify database discovery when `assume_role_arn` and `external_id` is set.
- Verify database access through the Web UI:
  - [ ] The table shows the `name`, `description`, `type`, and `labels` columns
  - [ ] The `Step 2` login value matches the row's `name` column
  - [ ] Searching by `labels` in the search bar works
- [ ] Verify `tsh bench` load tests (instructions on Notion -> Database Access -> Load test) @Tener
- [ ] Verify database session playback (`tsh play`) @Tener

TLS Routing @greedy52

- [ ] Verify that the `v2` configuration starts only a single listener for the proxy service, in contrast with the `v1` configuration. Given a configuration with the proxy, SSH, and auth services enabled, there should be a total of three listeners, with only `*:3080` open for the proxy service; 3022 and 3025 will be opened for the other services.
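A sketch of such a `v2` configuration (listen addresses are assumptions; the point is one proxy listener on 3080, with 3022/3025 belonging to the SSH and auth services):

```yaml
# Hypothetical v2 config sketch: three enabled services, but only one
# listener (*:3080) for the proxy service itself.
version: v2
proxy_service:
  enabled: "yes"
  web_listen_addr: 0.0.0.0:3080
auth_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3025
ssh_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3022
```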
v1
, there should be additional ports 3023 and 3024.multiplex
modeauth_service.proxy_listener_mode: "multiplex"
web_proxy_addr == tunnel_addr
tsh db connect
works through proxy running inmultiplex
mode @GavinFrazartsh proxy db
with a GUI client.multiplex
modessh -o "ForwardAgent yes" -o "ProxyCommand tsh proxy ssh" user@host.example.com
ssh -o "ForwardAgent yes" -o "ProxyCommand tsh proxy ssh --user=%r --cluster=leaf-cluster %h:%p" user@node.foo.com
tsh ssh
access through proxy running in multiplex modemultiplex
mode, usingtsh
multiplex
mode behind L7 load balancer @greedy52tsh login
andtctl
tsh ssh
andtsh config
tsh proxy db
andtsh db connect
tsh proxy app
andtsh aws
tsh proxy kube