Let's work on adding https://raw.githubusercontent.com/ii/coder-in-vcluster-explore/master/README.org to .sharingio/init first. That will give us a control plane to interact with from our coder templates.
In the end, the template should work the same way the kubernetes template does, but within a vcluster.
After vcluster pods are deploying correctly, start on https://github.com/sharingio/coder/issues/9
With a working clusterapi w/ vcluster, we can use terraform to get a provisioned cluster in 22 seconds.
ii@sanskar:~/sharingio/coder/.sharing.io/vcluster$ terraform apply -auto-approve ; kubectl get clusters -n coder-ws vcluster1 -w
kubernetes_namespace.work-namespace: Creating...
kubernetes_namespace.work-namespace: Creation complete after 0s [id=coder-ws]
kubernetes_manifest.cluster_vclusters_vcluster1: Creating...
kubernetes_manifest.vcluster_vclusters_vcluster1: Creating...
kubernetes_manifest.vcluster_vclusters_vcluster1: Creation complete after 1s
kubernetes_manifest.cluster_vclusters_vcluster1: Creation complete after 1s
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
NAME PHASE AGE VERSION
vcluster1 Provisioning 0s
vcluster1 Provisioning 22s
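For reference, the Terraform behind that apply is roughly the following. This is a sketch only, assuming the cluster-api-provider-vcluster API group/version; the actual template in .sharing.io/vcluster fills in more of the VCluster spec.

resource "kubernetes_namespace" "work-namespace" {
  metadata {
    name = "coder-ws"
  }
}

# CAPI Cluster object; with the vcluster provider the VCluster doubles as both
# the infrastructure and the control plane.
resource "kubernetes_manifest" "cluster_vclusters_vcluster1" {
  manifest = {
    apiVersion = "cluster.x-k8s.io/v1beta1"
    kind       = "Cluster"
    metadata = {
      name      = "vcluster1"
      namespace = kubernetes_namespace.work-namespace.metadata[0].name
    }
    spec = {
      controlPlaneRef = {
        apiVersion = "infrastructure.cluster.x-k8s.io/v1alpha1"
        kind       = "VCluster"
        name       = "vcluster1"
      }
      infrastructureRef = {
        apiVersion = "infrastructure.cluster.x-k8s.io/v1alpha1"
        kind       = "VCluster"
        name       = "vcluster1"
      }
    }
  }
}

resource "kubernetes_manifest" "vcluster_vclusters_vcluster1" {
  manifest = {
    apiVersion = "infrastructure.cluster.x-k8s.io/v1alpha1"
    kind       = "VCluster"
    metadata = {
      name      = "vcluster1"
      namespace = kubernetes_namespace.work-namespace.metadata[0].name
    }
    # the real template sets the vcluster version / helm values in spec here
    spec = {}
  }
}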
Kubeconfig is available as a secret:
ii@sanskar:~/sharingio/coder/.sharing.io/vcluster$ kubectl get secrets -n coder-ws vcluster1-kubeconfig -o jsonpath={.data.value} | base64 -d | grep -v data:
apiVersion: v1
clusters:
- cluster:
    server: https://vcluster1.coder-ws.svc:443
  name: my-vcluster
contexts:
- context:
    cluster: my-vcluster
    namespace: default
    user: my-vcluster
  name: my-vcluster
current-context: my-vcluster
kind: Config
preferences: {}
users:
- name: my-vcluster
  user:
This is primarily going to be changes to the template currently sitting in .sharing.io/vcluster so that it works similarly to examples/templates/kubernetes. Basically deploying code-server and coder via the coder-specific Terraform modules.
@BobyMCbobs I wonder if it's possible to supply the manifests we want clusterapi to use within its cluster as part of the cluster definition?
Instead of needing to retrieve the kubeconfig at all, let's use a ClusterResourceSet:
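Something along these lines; only a sketch, since the addons API version, label selector, and ConfigMap name here are illustrative, and the Cluster object would need to carry the matching label.

resource "kubernetes_manifest" "clusterresourceset_capi_init" {
  manifest = {
    apiVersion = "addons.cluster.x-k8s.io/v1beta1"
    kind       = "ClusterResourceSet"
    metadata = {
      name      = "capi-init"
      namespace = kubernetes_namespace.workspace.metadata[0].name
    }
    spec = {
      # applied to any Cluster in the namespace carrying this label
      clusterSelector = {
        matchLabels = {
          "crs/capi-init" = "true"
        }
      }
      # ClusterResourceSet can only reference ConfigMaps and Secrets
      resources = [
        {
          kind = "ConfigMap"
          name = "capi-init"
        }
      ]
    }
  }
}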
Also, I think we should write out the kubeconfig to the metadata. We won't need to retrieve it for other Terraform resources the way we currently do, but we still want to make it easily available to retrieve:
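Roughly like this; a sketch only, assuming the coder provider's coder_metadata resource and the usual data "coder_workspace" "me" {} block. Whether we inline the secret's value or just point at it is open, since the secret only exists once CAPI has finished provisioning.

resource "coder_metadata" "kubeconfig" {
  # attach the metadata to a resource that always exists in the plan
  resource_id = kubernetes_namespace.workspace.id

  # name of the secret holding the kubeconfig, written by the CAPI controllers
  item {
    key   = "kubeconfig-secret"
    value = "${data.coder_workspace.me.name}-kubeconfig"
  }

  # one-liner to pull it out of the parent cluster
  item {
    key   = "kubeconfig-fetch"
    value = "kubectl get secret -n ${data.coder_workspace.me.name} ${data.coder_workspace.me.name}-kubeconfig -o jsonpath={.data.value} | base64 -d"
  }
}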
I suspect that will allow us to retrieve the kubeconfig via coder show WORKSPACE and the web UI.
Be sure to bring up cluster-api on the cluster this way:
export EXP_CLUSTER_RESOURCE_SET=true && clusterctl init --infrastructure=vcluster
A bit surprising that it only supports configmaps and secrets: https://github.com/kubernetes-sigs/cluster-api/blob/main/exp/addons/api/v1alpha4/clusterresourceset_types.go#L70-L71
// ResourceRef specifies a resource.
type ResourceRef struct {
    // Name of the resource that is in the same namespace with ClusterResourceSet object.
    // +kubebuilder:validation:MinLength=1
    Name string `json:"name"`

    // Kind of the resource. Supported kinds are: Secrets and ConfigMaps.
    // +kubebuilder:validation:Enum=Secret;ConfigMap
    Kind string `json:"kind"`
}
Added an org file to help drive dev: https://github.com/sharingio/coder/tree/main/examples/templates/vcluster#vcluster-workspace
TLDR:
export WORKSPACE=v7 #change to create new ones as you iterate
coder template push vcluster -d examples/templates/vcluster --yes --parameter-file examples/templates/vcluster/vcluster.param.yaml
coder create $WORKSPACE --template vcluster --parameter-file examples/templates/vcluster/vcluster.param.yaml --yes
unset KUBECONFIG
TMPFILE=$(mktemp -t kubeconfig-XXXXX)
kubectl get secrets -n $WORKSPACE ${WORKSPACE}-kubeconfig -o jsonpath={.data.value} | base64 -d > $TMPFILE
export KUBECONFIG=$TMPFILE
kubectl get ns
coder ssh $WORKSPACE
Added a template for cool.yaml so we can have an easier-to-edit manifest:
https://github.com/sharingio/coder/blob/main/examples/templates/vcluster/cluster.tf#L152-L155
"data" = {
"cool.yaml" = templatefile("cool.template.yaml",
{
coder_command = jsonencode(["sh", "-c", coder_agent.main.init_script]),
coder_token = coder_agent.main.token
})
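For context, that data block sits inside the ConfigMap manifest that the ClusterResourceSet picks up, roughly like this (a sketch; the ConfigMap name is illustrative):

resource "kubernetes_manifest" "configmap_capi_init" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "capi-init"
      namespace = kubernetes_namespace.workspace.metadata[0].name
    }
    # cool.yaml carries the manifests applied inside the vcluster, templated
    # with the coder agent's init script and token
    data = {
      "cool.yaml" = templatefile("cool.template.yaml",
        {
          coder_command = jsonencode(["sh", "-c", coder_agent.main.init_script]),
          coder_token   = coder_agent.main.token
        })
    }
  }
}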
It's likely we can start getting the ingress (via pair or coder_app) working next. It would also be good to understand the 403 errors and why the web UI sometimes doesn't work.
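For the coder_app route, a minimal sketch pointing at the code-server process the agent already starts in the pod (field names vary between coder provider versions, e.g. newer releases use slug/display_name and subdomain, so treat this as illustrative):

resource "coder_app" "code-server" {
  agent_id = coder_agent.main.id
  name     = "code-server"
  url      = "http://localhost:13337"
  icon     = "/icon/code.svg"
}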
Pretty cool that k8s is now working, as is coder ssh WORKSPACE into a pod within a cluster:
ii@sanskar:~/sharingio/coder$ coder show v8
┌───────────────────────────────────────────────────────────────────────────────────────────────┐
│ RESOURCE STATUS VERSION ACCESS │
├───────────────────────────────────────────────────────────────────────────────────────────────┤
│ kubernetes_manifest.cluster │
├───────────────────────────────────────────────────────────────────────────────────────────────┤
│ kubernetes_manifest.clusterresourceset_capi_init │
├───────────────────────────────────────────────────────────────────────────────────────────────┤
│ kubernetes_manifest.configmap_capi_init │
│ └─ main (linux, amd64) ⦿ connected v0.9.1+27c8345 coder ssh v8 │
├───────────────────────────────────────────────────────────────────────────────────────────────┤
│ kubernetes_manifest.ingress_capi_kubeapi │
├───────────────────────────────────────────────────────────────────────────────────────────────┤
│ kubernetes_manifest.vcluster │
├───────────────────────────────────────────────────────────────────────────────────────────────┤
│ kubernetes_namespace.workspace │
└───────────────────────────────────────────────────────────────────────────────────────────────┘
ii@sanskar:~/sharingio/coder$ coder ssh v8
coder@code-server-0:~$ ps ax
PID TTY STAT TIME COMMAND
1 ? Ssl 0:00 ./coder agent
33 ? Ssl 0:00 ./coder agent --no-reap
124 ? Sl 0:00 /usr/lib/code-server/lib/node /usr/lib/code-server --auth none --port 13337
125 ? S 0:00 tee code-server-install.log
143 ? Sl 0:00 /usr/lib/code-server/lib/node /usr/lib/code-server/out/node/entry
154 pts/0 Ss 0:00 /bin/bash -l
162 pts/0 R+ 0:00 ps ax
In addition to coder ssh WORKSPACE, we can retrieve the KUBECONFIG for the cluster from the secret to see how the pod is doing:
ii@sanskar:~/sharingio/coder$ unset KUBECONFIG #ensure we are talking to parent / cape cluster
ii@sanskar:~/sharingio/coder$ TMPFILE=$(mktemp -t kubeconfig-XXXXX)
ii@sanskar:~/sharingio/coder$ kubectl get secrets -n $WORKSPACE ${WORKSPACE}-kubeconfig -o jsonpath={.data.value} | base64 -d > $TMPFILE
ii@sanskar:~/sharingio/coder$ export KUBECONFIG=$TMPFILE # talk to created cluster
ii@sanskar:~/sharingio/coder$ kubectl get ns
NAME STATUS AGE
default Active 23s
kube-system Active 23s
kube-public Active 23s
kube-node-lease Active 22s
ii@sanskar:~/sharingio/coder$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/code-server-0 1/1 Running 0 14s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.102.152.165 <none> 443/TCP 27s
NAME READY AGE
statefulset.apps/code-server 1/1 14s
This issue is becoming stale. In order to keep the tracker readable and actionable, I'm going to close this issue in 7 days if there isn't more activity.