hugoShaka opened this issue 1 month ago
There are two ways to solve this:

In the first solution, we create a new in-process tbot for each MachineID datasource.

Pros:

Cons:

In the second solution:

Pros:

Cons:
cc @strideynet
I would love to see something like:
```hcl
data "teleport_cluster" "my_secret_cluster" {
  host = "my-ultra-secret-cluster.com"
}

provider "kubernetes" {
  host                   = data.teleport_cluster.my_secret_cluster.host
  cluster_ca_certificate = base64decode(data.teleport_cluster.my_secret_cluster.kubernetes_host_ca)
  tls_server_name        = data.teleport_cluster.my_secret_cluster.kube_tls_server_name

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["kube", "credentials", "--kube-cluster", var.cluster_name, "--proxy", data.teleport_cluster.my_secret_cluster.host]
    command     = "tbot"
  }
}
```
If we can remove the call to tbot, that would be really great. On Terraform Cloud, there are few ways to run this kind of command, short of running a dedicated custom agent (which has further limitations).
> If we can remove the call to tbot, that would be really great. On Terraform Cloud, there are few ways to run this kind of command, short of running a dedicated custom agent (which has further limitations).
We can, by replacing the exec block with client_certificate and client_key set to PEM-encoded certs.
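A minimal sketch of what that could look like, reusing the `teleport_cluster` datasource from the example above — the `client_cert` and `client_key` attributes are hypothetical, not an existing schema:

```hcl
provider "kubernetes" {
  host                   = data.teleport_cluster.my_secret_cluster.host
  cluster_ca_certificate = base64decode(data.teleport_cluster.my_secret_cluster.kubernetes_host_ca)
  tls_server_name        = data.teleport_cluster.my_secret_cluster.kube_tls_server_name

  # Hypothetical attributes: PEM-encoded client credentials issued by the
  # datasource, replacing the exec plugin entirely.
  client_certificate = data.teleport_cluster.my_secret_cluster.client_cert
  client_key         = data.teleport_cluster.my_secret_cluster.client_key
}
```

This keeps the whole credential exchange inside the provider, which is what restricted runtimes like Terraform Cloud need.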
@tigrato I'd like to have MachineID-specific datasources providing the equivalent of a kubeconfig dynamically generated by tbot:
```hcl
provider "teleport" {
  addr               = "proxy.example.com:443"
  identity_file_path = "terraform-identity/identity"
}

resource "teleport_machine_id_kubernetes" "kubernetes_deployer" {
  // when onboarding is not specified, we could use the provider's tbot settings if needed?
  onboarding = {
    token       = "kubernetes-deployer-token"
    join_method = "terraform_cloud"
  }
  kubernetes_cluster = "my-kube-cluster"
}

provider "kubernetes" {
  host                   = teleport_machine_id_kubernetes.kubernetes_deployer.host
  client_certificate     = base64decode(teleport_machine_id_kubernetes.kubernetes_deployer.client_cert)
  client_key             = base64decode(teleport_machine_id_kubernetes.kubernetes_deployer.client_key)
  cluster_ca_certificate = base64decode(teleport_machine_id_kubernetes.kubernetes_deployer.cluster_ca)
  tls_server_name        = teleport_machine_id_kubernetes.kubernetes_deployer.kube_tls_server_name
}
```
After thinking a bit about the design, I think that treating machineID as a provider would make sense. The API could look like this:
```hcl
provider "machine_id" {
  proxy       = "teleport.example.com"
  join_token  = "foo"
  join_method = "terraform_cloud"
}

data "machine_id_kubernetes" "my-cluster" {
  kubernetes_cluster = "foobar"
}

provider "kubernetes" {
  host                   = data.machine_id_kubernetes.my-cluster.host
  client_certificate     = base64decode(data.machine_id_kubernetes.my-cluster.client_cert)
  client_key             = base64decode(data.machine_id_kubernetes.my-cluster.client_key)
  cluster_ca_certificate = base64decode(data.machine_id_kubernetes.my-cluster.cluster_ca)
  tls_server_name        = data.machine_id_kubernetes.my-cluster.kube_tls_server_name
}
```
Each MachineID destination type would have its own datasource. We could do a one-shot bot run on provider configuration to populate the bot's in-memory store, then do an individual one-shot bot run for every datasource with the requested destination.
The main limitation would be that the certs are generated one-shot and never renewed (so you'd get roughly one hour of access). But if we make tbot able to dynamically add/remove destinations, this could be fixed later.
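Under that design, every destination type would follow the same shape. A hypothetical database datasource, purely for illustration (neither the datasource nor its attributes exist today):

```hcl
# Hypothetical: a database destination datasource following the same
# pattern as machine_id_kubernetes above.
data "machine_id_database" "my-db" {
  service  = "postgres-prod" # Teleport database service name (illustrative)
  username = "app"
  database = "main"
}
```

The per-destination datasource keeps each provider block (Kubernetes, database, etc.) consuming only the credentials it actually needs.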
**What would you like Teleport to do?**
As a user, I want the Teleport Terraform provider to obtain credentials and access Teleport Protected Resources (TPRs) so that I can provision my whole infrastructure from Terraform.
**What problem does this solve?**

**Workaround**
Run tbot on the side. This does not work for new dynamic resources, and does not work in restricted runtimes such as HCP or Spacelift.
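For the Kubernetes provider specifically, the side-running tbot workaround boils down to something like this — assuming tbot is configured with a Kubernetes output writing a kubeconfig to the path below (the path is illustrative):

```hcl
# A separately managed tbot process keeps this kubeconfig fresh;
# Terraform only reads the file at plan/apply time.
provider "kubernetes" {
  config_path = "/opt/machine-id/kubeconfig.yaml"
}
```

This is exactly what restricted runtimes can't do: there is nowhere to run the long-lived tbot process next to the Terraform run.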