Closed: pdettori closed this issue 1 year ago.
I found a workaround which seems to work: run `kubectl edit synctargets.workload.kcp.dev <synctarget name>` and set the correct path for the workspace where the SyncTarget is hosted.

Change:
```yaml
...
spec:
  supportedAPIExports:
  - workspace:
      exportName: kubernetes
      path: root:compute
...
```
to:
```yaml
spec:
  supportedAPIExports:
  - workspace:
      exportName: kubernetes
      path: <workspace where SyncTarget is located>
```
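For reference, the same change can be applied non-interactively instead of via `kubectl edit`. This is only a sketch against the v0.10.0 schema shown above; `<synctarget name>` and the workspace path are placeholders for your own values, and it assumes the broken entry is the first (index 0) in `supportedAPIExports`:

```shell
# Hypothetical one-liner equivalent of the kubectl edit above:
# replace the supportedAPIExports workspace path with the workspace
# that actually hosts the SyncTarget.
kubectl patch synctargets.workload.kcp.dev <synctarget name> --type=json \
  -p '[{"op": "replace",
        "path": "/spec/supportedAPIExports/0/workspace/path",
        "value": "root:users:zu:yc:kcp-admin:mycompute"}]'
```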
Then all seems to work fine:

```console
$ kubectl get apiresourceschemas.apis.kcp.dev
NAME                                  AGE
rev-656.services.core                 3m49s
rev-661.ingresses.networking.k8s.io   3m49s
rev-662.deployments.apps              3m49s
```
```console
$ kubectl get apiexport kubernetes -o yaml
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport
metadata:
  annotations:
    extra.apis.kcp.dev/compute.workload.kcp.dev: "true"
    kcp.dev/cluster: root:users:zu:yc:kcp-admin:mycompute
    workload.kcp.dev/skip-default-object-creation: "true"
  creationTimestamp: "2023-01-20T20:28:52Z"
  generation: 3
  name: kubernetes
  resourceVersion: "716"
  uid: 0a2e1f3f-1e50-4ef9-b89e-171a3e945243
spec:
  identity:
    secretRef:
      name: kubernetes
      namespace: kcp-system
  latestResourceSchemas:
  - rev-662.deployments.apps
  - rev-661.ingresses.networking.k8s.io
  - rev-656.services.core
status:
  conditions:
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    status: "True"
    type: IdentityValid
  - lastTransitionTime: "2023-01-20T20:28:52Z"
    status: "True"
    type: VirtualWorkspaceURLsReady
  identityHash: f30e42f13546e603ddf4fe8f8d6342c6d3cdfe201c8bd61dd952c528ae1e5a25
  virtualWorkspaces:
  - url: https://192.168.0.104:6443/services/apiexport/root:users:zu:yc:kcp-admin:mycompute/kubernetes
```
```console
$ kubectl api-resources | grep deployments
deployments   deploy   apps/v1   true   Deployment
```
Update: I have hit pretty much the same issue with the latest from the main branch. The steps are similar to the above, this time running `make build` and using the generated binaries for kcp and the plugins.
```console
$ k get synctargets.workload.kcp.io control -o yaml
apiVersion: workload.kcp.io/v1alpha1
kind: SyncTarget
metadata:
  annotations:
    kcp.io/cluster: 341bmcu515qup567
  creationTimestamp: "2023-01-20T21:53:56Z"
  generation: 1
  labels:
    internal.workload.kcp.io/key: 2CD5ZItufILqTDeeWTWneTjjAmG7PYUqCvaKtD
  name: control
  resourceVersion: "719"
  uid: 3f1aa7ea-9e25-439e-bd38-d772afa2206d
spec:
  supportedAPIExports:
  - export: kubernetes
    path: root:compute
  - export: kubernetes
  unschedulable: false
status:
  conditions:
  - lastTransitionTime: "2023-01-20T21:53:56Z"
    message: No heartbeat yet seen
    reason: ErrorHeartbeat
    severity: Warning
    status: "False"
    type: Ready
  - lastTransitionTime: "2023-01-20T21:53:56Z"
    message: No heartbeat yet seen
    reason: ErrorHeartbeat
    severity: Warning
    status: "False"
    type: HeartbeatHealthy
  syncedResources:
  - identityHash: 59b8700706ab8f8369e3f953c321947992f56f2ddcef8cb5d401a4d59408b2dc
    resource: services
    state: Incompatible
    versions:
    - v1
  - group: networking.k8s.io
    identityHash: 59b8700706ab8f8369e3f953c321947992f56f2ddcef8cb5d401a4d59408b2dc
    resource: ingresses
    state: Incompatible
    versions:
    - v1
  - group: apps
    identityHash: 59b8700706ab8f8369e3f953c321947992f56f2ddcef8cb5d401a4d59408b2dc
    resource: deployments
    state: Incompatible
    versions:
    - v1
  virtualWorkspaces:
  - syncerURL: https://192.168.0.104:6443/services/syncer/341bmcu515qup567/control/3f1aa7ea-9e25-439e-bd38-d772afa2206d
    upsyncerURL: https://192.168.0.104:6443/services/upsyncer/341bmcu515qup567/control/3f1aa7ea-9e25-439e-bd38-d772afa2206d
```
The same workaround used for v0.10.0 works for the latest from main as well: edit the SyncTarget to remove the broken export and set the other one to the correct workspace; in my case I set the path to `path: kvdk2spgmbix:mycompute`.
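Note that on main the `supportedAPIExports` entries use the flattened `export`/`path` form rather than the nested `workspace` block from v0.10.0. After the edit, my spec looked roughly like this (a sketch from my environment; `kvdk2spgmbix:mycompute` is the workspace path in my setup):

```yaml
spec:
  supportedAPIExports:
  - export: kubernetes
    path: kvdk2spgmbix:mycompute   # workspace where the SyncTarget lives
  unschedulable: false
```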
Thanks to some help from @lionelvillard I was able to make my scenario work for a deployment resource. Things work out for deployments because the `root:compute` workspace already has default apiresourceschemas for deployment, ingress, and service. I have also tried with a CRD resource, and in that case the resource is added to the export as well. I will do more tests on the main branch and open new issues as needed.
Describe the bug
I am testing the basic scenario of starting kcp, creating a new compute workspace, and generating a syncer to install on a pCluster. When I look at the SyncTarget created in the new compute workspace (in my example `root:users:zu:yc:kcp-admin:mycompute`), it points to the `root:compute` workspace instead. Here is what I see even before I start the syncer on the pCluster:

Note the status `Incompatible` for each SyncedResource. I have noticed that `root:compute` already has `apiresourceschemas` that are created by default, and the SyncTarget is picking those up.

After I start the syncer on the pCluster, I also see a lot of errors there:

And on my compute workspace, no apiresourceschemas are created:

But apiresourceimports are created:

and the APIExport does not export any resource:
Steps To Reproduce
`kcp start`
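In more detail, my repro is roughly the following. This is only a sketch: the plugin subcommands and flags may differ slightly between v0.10.0 and main, and `mycompute`, `control`, and the syncer image tag are just the names from my setup:

```shell
# 1. Start kcp (v0.10.0 release binary, or binaries from `make build` on main)
kcp start

# 2. In another terminal, point kubectl at kcp, then create and enter
#    a new compute workspace
export KUBECONFIG=.kcp/admin.kubeconfig
kubectl ws create mycompute --enter

# 3. Generate the syncer manifests for a SyncTarget named "control"
kubectl kcp workload sync control \
  --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.10.0 -o syncer.yaml

# 4. Apply syncer.yaml on the pCluster, then inspect the SyncTarget in kcp
kubectl get synctargets control -o yaml
```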
Expected Behaviour
I expect the SyncTarget to point to the current workspace from which the syncer deployment file was generated, and as a result apiresourceschemas to be created and the APIExport to export those resources.
Additional Context
No response