kcp-dev / kcp

Kubernetes-like control planes for form-factors and use-cases beyond Kubernetes and container workloads.
https://kcp.io
Apache License 2.0

bug: "kubectl kcp workload sync" generates SyncTarget not matching current workspace #2662

Closed: pdettori closed this issue 1 year ago

pdettori commented 1 year ago

Describe the bug

I am testing the basic scenario of starting kcp, creating a new compute workspace, and generating a syncer to install on a pCluster. The SyncTarget created in the new compute workspace (in my example, root:users:zu:yc:kcp-admin:mycompute) points to the root:compute workspace instead. Here is what I see even before I start the syncer on the pCluster:

kubectl get synctargets.workload.kcp.dev control -o yaml
apiVersion: workload.kcp.dev/v1alpha1
kind: SyncTarget
metadata:
  annotations:
    kcp.dev/cluster: root:users:zu:yc:kcp-admin:mycompute
  creationTimestamp: "2023-01-20T20:28:52Z"
  generation: 1
  labels:
    internal.workload.kcp.dev/key: 7rzSmzjAZbTuojUVeewmOGHckRDGrJQjwlLare
  name: control
  resourceVersion: "636"
  uid: 6f6e3a9f-74a7-4349-b900-64c096315ce9
spec:
  supportedAPIExports:
  - workspace:
      exportName: kubernetes
      path: root:compute
  - workspace:
      exportName: kubernetes
  unschedulable: false
status:
  conditions:
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    message: No heartbeat yet seen
    reason: ErrorHeartbeat
    severity: Warning
    status: "False"
    type: Ready
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    message: No heartbeat yet seen
    reason: ErrorHeartbeat
    severity: Warning
    status: "False"
    type: HeartbeatHealthy
  syncedResources:
  - identityHash: 351d475fed8dd0cbad1aa698345d72f51644cf84d9d2398668d93b67948134a1
    resource: services
    state: Incompatible
    versions:
    - v1
  - group: networking.k8s.io
    identityHash: 351d475fed8dd0cbad1aa698345d72f51644cf84d9d2398668d93b67948134a1
    resource: ingresses
    state: Incompatible
    versions:
    - v1
  - group: apps
    identityHash: 351d475fed8dd0cbad1aa698345d72f51644cf84d9d2398668d93b67948134a1
    resource: deployments
    state: Incompatible
    versions:
    - v1
  virtualWorkspaces:
  - url: https://192.168.0.104:6443/services/syncer/root:users:zu:yc:kcp-admin:mycompute/control/6f6e3a9f-74a7-4349-b900-64c096315ce9

Note the status Incompatible for each SyncedResource. I have noticed that root:compute already has apiresourceschemas that are created by default, and the SyncTarget is picking those up.

After I start the syncer on the pCluster, I also see a lot of errors there:

E0120 20:36:59.556774       1 reflector.go:138] k8s.io/client-go@v0.24.4/tools/cache/reflector.go:167: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: the server could not find the requested resource

And on my compute workspace, no apiresourceschemas are created:

kubectl get apiresourceschemas.apis.kcp.dev
No resources found

But apiresourceimports are created:

kubectl get apiresourceimports.apiresource.kcp.dev
NAME
deployments.control.v1.apps
ingresses.control.v1.networking.k8s.io
services.control.v1.core

and the APIExport does not export any resource:

k get apiexport kubernetes -o yaml
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport
metadata:
  annotations:
    extra.apis.kcp.dev/compute.workload.kcp.dev: "true"
    kcp.dev/cluster: root:users:zu:yc:kcp-admin:mycompute
    workload.kcp.dev/skip-default-object-creation: "true"
  creationTimestamp: "2023-01-20T20:28:52Z"
  generation: 2
  name: kubernetes
  resourceVersion: "647"
  uid: 0a2e1f3f-1e50-4ef9-b89e-171a3e945243
spec:
  identity:
    secretRef:
      name: kubernetes
      namespace: kcp-system
status:
  conditions:
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    status: "True"
    type: IdentityValid
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    status: "True"
    type: VirtualWorkspaceURLsReady
  identityHash: f30e42f13546e603ddf4fe8f8d6342c6d3cdfe201c8bd61dd952c528ae1e5a25
  virtualWorkspaces:
  - url: https://192.168.0.104:6443/services/apiexport/root:users:zu:yc:kcp-admin:mycompute/kubernetes

Steps To Reproduce

  1. Install kcp v0.10.0 and plugins
  2. Run kcp start
  3. On another terminal, set KUBECONFIG to point to the kcp kubeconfig, then run:
    kubectl ws
    kubectl ws create mycompute --enter
    kubectl kcp workload sync control --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.10.0 -o ${HOME}/syncer-control.yaml --resources=deployments.apps 
  4. Check SyncTarget
    kubectl get synctargets.workload.kcp.dev control -o yaml
  5. Create a pCluster (kind)
  6. Start syncer on pCluster
    KUBECONFIG=~/.kube/config kubectl apply -f "${HOME}/syncer-control.yaml"
  7. Wait until the syncer starts, check its log for the errors mentioned above, and verify that no apiresourceschemas are created and the APIExport exports no resources.

Expected Behaviour

I expect the SyncTarget to point to the current workspace from which the syncer deployment file was generated, and as a result the apiresourceschemas to be created and the APIExport to export those resources.

Additional Context

No response

pdettori commented 1 year ago

I found a workaround that seems to work: run kubectl edit synctargets.workload.kcp.dev <synctarget name> and set the correct path for the workspace where the SyncTarget is hosted:

Change:

...
spec:
  supportedAPIExports:
  - workspace:
      exportName: kubernetes
      path: root:compute
...

to:

spec:
  supportedAPIExports:
  - workspace:
      exportName: kubernetes
      path: <workspace where SyncTarget is located>
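The same edit can also be applied to the generated manifest before it is applied. A minimal sketch in Python, assuming the SyncTarget spec has already been parsed into a dict (the `patch_supported_api_exports` helper name and the example workspace path are mine for illustration, not part of kcp):

```python
# Sketch: rewrite supportedAPIExports in a parsed SyncTarget spec
# (v0.10.0 shape, nested "workspace" block) so every entry points at
# the workspace actually hosting the SyncTarget, and collapse entries
# that become duplicates once the path is set.

def patch_supported_api_exports(spec: dict, current_workspace: str) -> dict:
    seen = set()
    patched = []
    for entry in spec.get("supportedAPIExports", []):
        ws = dict(entry.get("workspace", {}))
        ws["path"] = current_workspace  # replaces e.g. root:compute
        key = (ws.get("exportName"), ws["path"])
        if key in seen:
            continue  # the second, path-less entry collapses into the first
        seen.add(key)
        patched.append({"workspace": ws})
    spec["supportedAPIExports"] = patched
    return spec


if __name__ == "__main__":
    spec = {
        "supportedAPIExports": [
            {"workspace": {"exportName": "kubernetes", "path": "root:compute"}},
            {"workspace": {"exportName": "kubernetes"}},
        ],
        "unschedulable": False,
    }
    print(patch_supported_api_exports(spec, "root:users:zu:yc:kcp-admin:mycompute"))
```

This only mirrors the manual kubectl edit above; whether the two entries should really be merged into one is an assumption on my part.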

Then all seems to work fine:

kubectl get apiresourceschemas.apis.kcp.dev 
NAME                                  AGE
rev-656.services.core                 3m49s
rev-661.ingresses.networking.k8s.io   3m49s
rev-662.deployments.apps              3m49s
kubectl get apiexport kubernetes -o yaml
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport
metadata:
  annotations:
    extra.apis.kcp.dev/compute.workload.kcp.dev: "true"
    kcp.dev/cluster: root:users:zu:yc:kcp-admin:mycompute
    workload.kcp.dev/skip-default-object-creation: "true"
  creationTimestamp: "2023-01-20T20:28:52Z"
  generation: 3
  name: kubernetes
  resourceVersion: "716"
  uid: 0a2e1f3f-1e50-4ef9-b89e-171a3e945243
spec:
  identity:
    secretRef:
      name: kubernetes
      namespace: kcp-system
  latestResourceSchemas:
  - rev-662.deployments.apps
  - rev-661.ingresses.networking.k8s.io
  - rev-656.services.core
status:
  conditions:
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    status: "True"
    type: IdentityValid
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    status: "True"
    type: VirtualWorkspaceURLsReady
  identityHash: f30e42f13546e603ddf4fe8f8d6342c6d3cdfe201c8bd61dd952c528ae1e5a25
  virtualWorkspaces:
  - url: https://192.168.0.104:6443/services/apiexport/root:users:zu:yc:kcp-admin:mycompute/kubernetes
kubectl api-resources | grep deployments
deployments                       deploy       apps/v1                           true         Deployment
pdettori commented 1 year ago

Update: I have found pretty much the same issue with the latest from the main branch. The steps are similar to the above, this time running make build and using the generated binaries for kcp and the plugins.

k get synctargets.workload.kcp.io control -o yaml
apiVersion: workload.kcp.io/v1alpha1
kind: SyncTarget
metadata:
  annotations:
    kcp.io/cluster: 341bmcu515qup567
  creationTimestamp: "2023-01-20T21:53:56Z"
  generation: 1
  labels:
    internal.workload.kcp.io/key: 2CD5ZItufILqTDeeWTWneTjjAmG7PYUqCvaKtD
  name: control
  resourceVersion: "719"
  uid: 3f1aa7ea-9e25-439e-bd38-d772afa2206d
spec:
  supportedAPIExports:
  - export: kubernetes
    path: root:compute
  - export: kubernetes
  unschedulable: false
status:
  conditions:
  - lastTransitionTime: "2023-01-20T21:53:56Z"
    message: No heartbeat yet seen
    reason: ErrorHeartbeat
    severity: Warning
    status: "False"
    type: Ready
  - lastTransitionTime: "2023-01-20T21:53:56Z"
    message: No heartbeat yet seen
    reason: ErrorHeartbeat
    severity: Warning
    status: "False"
    type: HeartbeatHealthy
  syncedResources:
  - identityHash: 59b8700706ab8f8369e3f953c321947992f56f2ddcef8cb5d401a4d59408b2dc
    resource: services
    state: Incompatible
    versions:
    - v1
  - group: networking.k8s.io
    identityHash: 59b8700706ab8f8369e3f953c321947992f56f2ddcef8cb5d401a4d59408b2dc
    resource: ingresses
    state: Incompatible
    versions:
    - v1
  - group: apps
    identityHash: 59b8700706ab8f8369e3f953c321947992f56f2ddcef8cb5d401a4d59408b2dc
    resource: deployments
    state: Incompatible
    versions:
    - v1
  virtualWorkspaces:
  - syncerURL: https://192.168.0.104:6443/services/syncer/341bmcu515qup567/control/3f1aa7ea-9e25-439e-bd38-d772afa2206d
    upsyncerURL: https://192.168.0.104:6443/services/upsyncer/341bmcu515qup567/control/3f1aa7ea-9e25-439e-bd38-d772afa2206d
pdettori commented 1 year ago

The same workaround used for v0.10.0 works for the latest from main as well: edit the SyncTarget to remove the broken export and set the remaining one to the correct workspace. In my case I set path: kvdk2spgmbix:mycompute
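For the main-branch shape, where supportedAPIExports entries are flat export/path pairs rather than a nested workspace block, the edit can be sketched the same way, again assuming the spec has been parsed into a dict (the helper name and example path are illustrative):

```python
# Sketch: apply the main-branch workaround to a parsed SyncTarget spec.
# Drops the entry pointing at root:compute (the default the plugin
# wrongly emits) and pins the remaining entry to the hosting workspace.

BROKEN_PATH = "root:compute"

def fix_supported_api_exports(spec: dict, current_workspace: str) -> dict:
    entries = spec.get("supportedAPIExports", [])
    kept = [e for e in entries if e.get("path") != BROKEN_PATH]
    for e in kept:
        e["path"] = current_workspace  # e.g. "kvdk2spgmbix:mycompute"
    spec["supportedAPIExports"] = kept
    return spec
```

For example, `fix_supported_api_exports(spec, "kvdk2spgmbix:mycompute")` on the spec shown above leaves a single export pinned to the mycompute workspace.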

pdettori commented 1 year ago

Thanks to some help from @lionelvillard I was able to make my scenario work for a deployment resource. It looks like things work out for deployments because the root:compute workspace already has default apiresourceschemas for deployment, ingress, and service. I have also tried with a CRD resource, and in that case the resource is added to the export as well. I will do more tests on the main branch and open new issues as needed.