hh opened 1 year ago
I see. How are you currently provisioning per-user resources? This is actually a feature we've considered adding to Coder.
I think there may be some limitations with Helm around provisioning multi-namespace resources. @hh - would it be possible to provision a coder-logstream-kube per user/namespace as well? That may be a nice workaround.
Ideally everything runs within the namespace we create for them, what token would the coder-logstream-kube pod use?
https://github.com/cloudnative-coop/space-templates/blob/canon/equipod/namedspaced.tf#L6-L15
```tf
resource "null_resource" "namespace" {
  # Install kubectl if it isn't already present
  provisioner "local-exec" {
    command = "~/kubectl version --client || (curl -L https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl -o ~/kubectl && chmod +x ~/kubectl)"
  }

  # Create the per-user namespace
  provisioner "local-exec" {
    command = "~/kubectl create ns ${local.spacename}"
  }

  # Apply the admin service account + RBAC manifest into it
  provisioner "local-exec" {
    command = "~/kubectl -n ${local.spacename} apply -f ${path.module}/manifests/admin-sa.yaml"
  }
}
```
The manifest it applies (manifests/admin-sa.yaml):

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admin
rules:
  - apiGroups:
      - ""
    resources:
      - "*"
    verbs:
      - "*"
  - apiGroups:
      - "*"
    resources:
      - "*"
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: admin
subjects:
  - kind: ServiceAccount
    name: admin
```
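As an aside, the hashicorp/kubernetes Terraform provider can create the same objects declaratively instead of shelling out to kubectl. A minimal sketch, assuming the provider is configured against the same cluster; the admin Role and RoleBinding from admin-sa.yaml could likewise be expressed with `kubernetes_role` and `kubernetes_role_binding`:

```tf
# Sketch only: equivalent provisioning via the hashicorp/kubernetes provider,
# without depending on a local kubectl binary.
resource "kubernetes_namespace" "workspace" {
  metadata {
    name = local.spacename
  }
}

resource "kubernetes_service_account" "admin" {
  metadata {
    name      = "admin"
    namespace = kubernetes_namespace.workspace.metadata[0].name
  }
  automount_service_account_token = true
}
```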
I see. Currently, coder-logstream-kube runs within the namespace and doesn't require its own token! It uses the agent token from each workspace's pod spec (which is scoped to only sending agent logs/stats for that specific workspace).
```shell
helm install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \
  --namespace coder \
  --set url=<your-coder-url>
```
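If the per-user/namespace workaround suggested above were driven from the workspace template itself, the hashicorp/helm Terraform provider could perform the same install per namespace. A sketch, not a tested setup: `local.spacename` comes from this template, `var.coder_url` is a hypothetical input variable, and the chart repository URL is the one given in the coder-logstream-kube README.

```tf
# Sketch: one coder-logstream-kube release per user namespace, created by the
# workspace template. var.coder_url is a hypothetical input variable.
resource "helm_release" "logstream_kube" {
  name       = "coder-logstream-kube"
  repository = "https://helm.coder.com/logstream-kube"
  chart      = "coder-logstream-kube"
  namespace  = local.spacename

  set {
    name  = "url"
    value = var.coder_url
  }
}
```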
@hh are you still looking to do a single logstream-kube deployment for multiple workspaces? Wondering if this is worth supporting or not. You'll have to be OK with a cluster role/binding if you don't want to limit it to a single namespace.
from #28:

v0.0.9-rc.0

- use-case: one logstream-kube deployment watching pods in multiple namespaces. If the `namespace` value is unset, logstream-kube should default to watching pods in all namespaces (assuming the proper permissions). This is not currently the case.

My values, installed in the `coder` namespace:

```yaml
USER-SUPPLIED VALUES:
url: https://eric-aks.demo.coder.com
```

My workspace is running in the `coder-workspaces` namespace, with the following role and rolebinding deployed:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: coder-logstream-kube-role
rules:
  - apiGroups: [""]
    resources: ["pods", "events"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets", "events"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: coder-logstream-kube-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: coder-logstream-kube-role
subjects:
  - kind: ServiceAccount
    name: coder-logstream-kube
    namespace: coder
```
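For the cluster-wide setup mentioned above (watching all namespaces rather than one), a ClusterRole/ClusterRoleBinding would replace the per-namespace Role. A sketch in the same Terraform provider as the earlier example, reusing the rules and service account from the quoted manifests:

```tf
# Sketch: cluster-wide variant of the quoted Role/RoleBinding, bound to the
# coder-logstream-kube service account in the coder namespace.
resource "kubernetes_cluster_role" "logstream" {
  metadata {
    name = "coder-logstream-kube-role"
  }
  rule {
    api_groups = [""]
    resources  = ["pods", "events"]
    verbs      = ["get", "watch", "list"]
  }
  rule {
    api_groups = ["apps"]
    resources  = ["replicasets", "events"]
    verbs      = ["get", "watch", "list"]
  }
}

resource "kubernetes_cluster_role_binding" "logstream" {
  metadata {
    name = "coder-logstream-kube-rolebinding"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.logstream.metadata[0].name
  }
  subject {
    kind      = "ServiceAccount"
    name      = "coder-logstream-kube"
    namespace = "coder"
  }
}
```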
From https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/:
There are benefits to deploying per-user namespaces:
We create a namespace per user and do not destroy it when a workspace is torn down. This allows expensive objects (like cert-manager/Let's Encrypt certs and DNS) to persist and be reused by multiple workspaces from the same user.
Some resources we use per user/namespace: