Describe the bug
When a Docker image name contains three or more path segments, kubenab replaces the registry instead of prepending the private registry to the pod's image, even with the flag REPLACE_REGISTRY_URL=false.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 119s default-scheduler Successfully assigned repo/nfs-client-provisioner-65cff57884-d7dlq to node3
Normal SandboxChanged 104s (x2 over 111s) kubelet, node3 Pod sandbox changed, it will be killed and re-created.
Warning Failed 45s (x5 over 105s) kubelet, node3 Error: ImagePullBackOff
Normal Pulling 33s (x4 over 112s) kubelet, node3 Pulling image "100.2.0.115:8088/external_storage/nfs-client-provisioner:v3.1.0"
Warning Failed 33s (x4 over 112s) kubelet, node3 Failed to pull image "100.2.0.115:8088/external_storage/nfs-client-provisioner:v3.1.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for 100.2.0.115:8088/external_storage/nfs-client-provisioner, repository does not exist or may require 'docker login'
Warning Failed 33s (x4 over 112s) kubelet, node3 Error: ErrImagePull
Normal BackOff 20s (x6 over 105s) kubelet, node3 Back-off pulling image "100.2.0.115:8088/external_storage/nfs-client-provisioner:v3.1.0"
Expected behavior
kubenab should prepend the private registry to the pod's image.
Should be 100.2.0.115:8088/repo/external_storage/nfs-client-provisioner:v3.1.0
Logs
kubectl logs -f -n kube-system kubenab-5ccbd84cdb-lsbdx
2019/10/15 01:04:14 AdmissionReview Namespace is: repo
2019/10/15 01:04:14 Container Image is repo/external_storage/nfs-client-provisioner:v3.1.0
2019/10/15 01:04:14 Image is not being pulled from Private Registry: repo/external_storage/nfs-client-provisioner:v3.1.0
2019/10/15 01:04:14 Changing image registry to: 100.2.0.115:8088/external_storage/nfs-client-provisioner:v3.1.0
2019/10/15 01:04:14 Serving request: /validate
2019/10/15 01:04:14 {"kind":"AdmissionReview","apiVersion":"admission.k8s.io/v1beta1","request":{"uid":"ae07f519-604e-4a51-879c-58d684eae321","kind":{"group":"","version":"v1","kind":"Pod"},"resource":{"group":"","version":"v1","resource":"pods"},"requestKind":{"group":"","version":"v1","kind":"Pod"},"requestResource":{"group":"","version":"v1","resource":"pods"},"namespace":"repo","operation":"CREATE","userInfo":{"username":"system:serviceaccount:kube-system:replicaset-controller","uid":"b7c19393-6ec2-47a2-8855-d511bfcdede0","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"]},"object":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nfs-client-provisioner-65cff57884-d7dlq","generateName":"nfs-client-provisioner-65cff57884-","namespace":"repo","uid":"046c4978-cc0c-4214-b714-e30f2bbc011f","creationTimestamp":"2019-10-15T01:04:14Z","labels":{"app":"nfs-client-provisioner","pod-template-hash":"65cff57884"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"nfs-client-provisioner-65cff57884","uid":"577716ce-c3a1-48d0-8246-006da6b68acf","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"nfs-client-root","nfs":{"server":"100.2.0.5","path":"/opt/share/test"}},{"name":"nfs-client-provisioner-token-2sgsx","secret":{"secretName":"nfs-client-provisioner-token-2sgsx","defaultMode":420}}],"containers":[{"name":"nfs-client-provisioner","image":"100.2.0.115:8088/external_storage/nfs-client-provisioner:v3.1.0","env":[{"name":"PROVISIONER_NAME","value":"fuseim.pri/ifs"},{"name":"NFS_SERVER","value":"100.2.0.5"},{"name":"NFS_PATH","value":"/opt/share/test"}],"resources":{},"volumeMounts":[{"name":"nfs-client-root","mountPath":"/persistentvolumes"},{"name":"nfs-client-provisioner-token-2sgsx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":
"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","nodeSelector":{"role":"master"},"serviceAccountName":"nfs-client-provisioner","serviceAccount":"nfs-client-provisioner","securityContext":{},"imagePullSecrets":[{"name":"regsecret"}],"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Pending","qosClass":"BestEffort"}},"oldObject":null,"dryRun":false,"options":{"kind":"CreateOptions","apiVersion":"meta.k8s.io/v1"}}}
Relevant code: if (len(imageParts) < 3) || repRegUrl (this condition does not seem reasonable here)
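A minimal Go sketch (not the actual kubenab source; wouldPrepend is a hypothetical helper) of why a check like (len(imageParts) < 3) || repRegUrl misclassifies deep image paths:

```go
package main

import (
	"fmt"
	"strings"
)

// wouldPrepend mimics the guard quoted above: it returns true when the
// registry would be prepended, false when the code falls through to the
// replace branch instead.
func wouldPrepend(image string, repRegUrl bool) bool {
	imageParts := strings.Split(image, "/")
	// With repRegUrl == false, any image with three or more path
	// segments fails this check, so its first segment gets replaced
	// rather than having the private registry prepended.
	return len(imageParts) < 3 || repRegUrl
}

func main() {
	fmt.Println(wouldPrepend("nfs-client-provisioner:v3.1.0", false))                       // true
	fmt.Println(wouldPrepend("external_storage/nfs-client-provisioner:v3.1.0", false))      // true
	// Three segments, as in this report: falls through to the replace branch.
	fmt.Println(wouldPrepend("repo/external_storage/nfs-client-provisioner:v3.1.0", false)) // false
}
```

This reproduces the log above: "repo/external_storage/nfs-client-provisioner:v3.1.0" splits into three parts, so the condition is false and the first segment ("repo") is replaced by the registry URL, yielding 100.2.0.115:8088/external_storage/... instead of 100.2.0.115:8088/repo/external_storage/...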
To Reproduce
Steps to reproduce the behavior:
Deploy kubenab with the settings from https://github.com/jfrog/kubenab/tree/master/deployment
Run kubectl describe [POD]
Additional context
None
Versions
kubenab
Version: 0.3.2