kcp-dev / contrib-tmc

An experimental add-on re-adding some Kubernetes compute APIs and implementing transparent multi-cluster scheduling
Apache License 2.0

After a synctarget is drained, the deployment still shows as running even though the workload in the synctarget is gone #110

Open kasturinarra opened 2 years ago

kasturinarra commented 2 years ago

Describe the bug
After a synctarget is drained, the deployment still shows as running even though the workload in the synctarget is gone.

[knarra@knarra root-org]$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kuard   1/1     1            1           26m

[knarra@knarra root-org]$ oc get pods --all-namespaces | grep kuard
[knarra@knarra root-org]$

To Reproduce Steps to reproduce the behavior:

  1. Add a synctarget by running the command `kubectl kcp workload sync knarracluster1 --syncer-image ghcr.io/kcp-dev/kcp/syncer:cf540bb -o - | KUBECONFIG=/home/knarra/Downloads/kubeconfig_411 kubectl apply -f -`
  2. Create a deployment using the command `kubectl create deployment kuard --image gcr.io/kuar-demo/kuard-amd64:blue --dry-run=client -o yaml > kuard.yaml && kubectl apply -f kuard.yaml`
  3. Now make sure that the deployment has been successfully synced
  4. Run the command `kubectl kcp workload drain knarracluster1`
  5. Run `kubectl get deployment`; even though the workload in the synctarget is gone, it still shows the following.

     kcp context:
     [knarra@knarra root-org]$ kubectl get deployment
     NAME    READY   UP-TO-DATE   AVAILABLE   AGE
     kuard   1/1     1            1           26m

     synctarget context:
     [knarra@knarra root-org]$ oc get pods --all-namespaces | grep kuard
     [knarra@knarra root-org]$
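
A quick way to see the mismatch side by side (a minimal sketch; kcp and synctarget are hypothetical kubeconfig context names for the two views above):

# compare the kcp workspace view with the physical cluster view
kubectl --context kcp get deployment kuard                # still reports 1/1
kubectl --context synctarget get pods -A | grep kuard     # returns nothing after the drain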

Expected behavior
kubectl get deployment should reflect that the workload is gone from the synctarget.
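
Presumably something like the following (an illustrative sketch; a later comment in this thread confirms the READY column should show 0/1):

NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kuard   0/1     1            0           26m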

Additional context
A similar issue is seen with 'kubectl kcp workload cordon' as well.
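
For reference, the cordon variant of the same check (a sketch reusing the synctarget name from the steps above):

kubectl kcp workload cordon knarracluster1
kubectl get deployment kuard   # per this report, still shows 1/1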

kasturinarra commented 2 years ago

I have found a similar issue while testing another case, as below:

When a placement refers to a location from a different workspace, I see that the deployment still shows as running even after the actual synctarget has been removed.

Version:

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v0.22.6", GitCommit:"b49c285c2a30f0dec38b83083e4aaac10dc902cc", GitTreeState:"clean", BuildDate:"2022-08-01T20:14:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3+kcp-v0.7.10", GitCommit:"95799cf", GitTreeState:"dirty", BuildDate:"2022-08-31T06:51:26Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (0.22) and server (1.24) exceeds the supported minor version skew of +/-1

Describe the bug: The deployment status is still shown as 1/1 even after the synctarget where it is running has been completely deleted, when the placement refers to a location from a different workspace.

To Reproduce:

  1. Create a workspace qetest1

  2. Create a synctarget using the command below:

     kubectl kcp workload sync qecluster1 --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.7.8 -o - | KUBECONFIG=/home/knarra/Downloads/kubeconfig_411 kubectl apply -f -

  3. Now create a deployment using the commands below:

     kubectl create deployment kuard --image gcr.io/kuar-demo/kuard-amd64:blue --dry-run=client -o yaml > kuard.yaml
     kubectl apply -f kuard.yaml

  4. Verify that the deployment is running fine.

  5. Now create another workspace qetest2

  6. Create an apibinding in this workspace so that the placement in this workspace refers to a location from qetest1:

     echo "apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding
metadata:
  name: byo-kubernetes
spec:
  reference:
    workspace:
      path: root:users:lw:ao:rh-sso-knarrakcp:qetest1
      exportName: kubernetes" | kubectl create -f -

  7. Verify that the apibinding and placement get created, and that the placement refers to the location as shown below (a verification sketch follows this list):

     spec:
       locationResource:
         group: workload.kcp.dev
         resource: synctargets
         version: v1alpha1
       locationSelectors:
       - {}
       locationWorkspace: root:users:lw:ao:rh-sso-knarrakcp:qetest1
       namespaceSelector: {}
  8. Now delete the cluster which has been added as a synctarget, i.e. make sure the cluster no longer exists.

  9. Now see that the deployment in qetest2 still runs and all the labels and annotations are present on the deployment:

     [knarra@knarra verification-tests]$ kubectl get deployment
     NAME    READY   UP-TO-DATE   AVAILABLE   AGE
     kuard   1/1     1            1           87m

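The verification sketch referenced in step 7, assuming the generated placement keeps kcp's default name default (the same name deleted in a later comment):

kubectl get placement default -o yaml   # locationWorkspace should point at root:users:lw:ao:rh-sso-knarrakcp:qetest1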

Expected Results: Since the deployment is no longer running, the READY column should show 0/1.
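
One way to check this programmatically (a sketch; status.readyReplicas is omitted when it is zero, so empty output before the slash also means 0):

kubectl get deployment kuard -o jsonpath='{.status.readyReplicas}/{.spec.replicas}{"\n"}'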

kasturinarra commented 2 years ago

The deployment is still shown as running when the placement has been deleted. Below are the steps I followed to reproduce the issue.

  1. Log in to the kcp-stable env

  2. Create an apibinding to use shared compute using the command below:

     echo "apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding
metadata:
  name: acm-kubernetes
spec:
  reference:
    workspace:
      path: root:redhat-acm-compute
      exportName: kubernetes" | kubectl create -f -

  3. Now create a deployment using the commands below:

     kubectl create deployment kuard --image gcr.io/kuar-demo/kuard-amd64:blue --dry-run=client -o yaml > kuard.yaml
     kubectl apply -f kuard.yaml

  4. Now delete the placement using the command below:

     kubectl delete placement default

  5. I see that the labels related to the synctarget have been removed from the deployment and the namespace, but the deployment still shows running:

     [knarra@knarra ~]$ kubectl get deployment
     NAME    READY   UP-TO-DATE   AVAILABLE   AGE
     kuard   1/1     1            1           6m16s
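
A quick way to see which labels survive the placement deletion (a sketch using the deployment from the steps above):

kubectl get deployment kuard --show-labels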

kasturinarra commented 1 year ago

I still see the issue happening, though I no longer see any annotations related to the synctarget on the deployment.

[knarra@knarra root-org]$ kubectl get deployment -o yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kcp.dev/cluster: root:users:qf:vg:rh-sso-knarra-redhat-com:ww1
    creationTimestamp: "2023-02-13T10:13:49Z"
    generation: 1
    labels:
      app: kuard
    name: kuard
    namespace: default
    resourceVersion: "2214"
    uid: 27e03eed-0116-49f3-b1d6-c430424e0cae
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: kuard
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: kuard
      spec:
        containers:
        - image: gcr.io/kuar-demo/kuard-amd64:blue
          name: kuard-amd64
          resources: {}
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2023-02-13T10:26:35Z"
      lastUpdateTime: "2023-02-13T10:26:35Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2023-02-13T10:26:33Z"
      lastUpdateTime: "2023-02-13T10:26:35Z"
      message: ReplicaSet "kuard-7946d4b7b7" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
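
To double-check, a compact sketch that greps the dumped object for any remaining workload.kcp.dev keys (none are expected per this comment):

kubectl get deployment kuard -o yaml | grep workload.kcp.dev || echo "no synctarget keys left"
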
mjudeikis commented 11 months ago

/transfer-issue contrib-tmc