openyurtio / yurt-app-manager

The NodePool-level workload controller manager for OpenYurt clusters
Apache License 2.0

[BUG] Running kubectl apply -f config/setup/all_in_one.yaml does not create resources in the kube-system namespace #99

Closed Lulucyliu closed 1 year ago

Lulucyliu commented 2 years ago

What happened:

NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
default        yurt-app-manager-5d5c7cbf6d-44l76          1/1     Running   0          4m4s
kube-flannel   kube-flannel-ds-brb9j                      1/1     Running   0          18h
kube-flannel   kube-flannel-ds-z9jbz                      1/1     Running   0          21h
kube-system    coredns-2qc76                              1/1     Running   0          17h
kube-system    coredns-r9lzz                              1/1     Running   0          17h
kube-system    etcd-cloud-node                            1/1     Running   0          21h
kube-system    kube-apiserver-cloud-node                  1/1     Running   0          21h
kube-system    kube-controller-manager-cloud-node         1/1     Running   0          18h
kube-system    kube-proxy-87qhk                           1/1     Running   0          18h
kube-system    kube-proxy-8fqvn                           1/1     Running   0          18h
kube-system    kube-scheduler-cloud-node                  1/1     Running   0          21h
kube-system    yurt-controller-manager-77b97fd47b-hn44t   1/1     Running   0          14h
kube-system    yurt-tunnel-agent-mrpth                    1/1     Running   0          94s
kube-system    yurt-tunnel-server-6fdb679789-gc5nq        1/1     Running   0          101s

What you expected to happen: the yurt-app-manager pod (yurt-app-manager-5ff95cdbb-bx7lc) should be created in the kube-system namespace

How to reproduce it (as minimally and precisely as possible): Build the OpenYurt image according to the official website tutorial

Anything else we need to know?:

Environment:

others

/kind bug

rambohe-ch commented 2 years ago

@Lulucyliu Thank you for raising this issue. The above problem has been solved by #101; please try again with the newest all_in_one.yaml file.

Lulucyliu commented 2 years ago

> @Lulucyliu Thank you for raising issue. The above problem has been solved by #101, and please have a try with the newest all_in_one.yaml file.

I tried the new all_in_one.yaml. If I use the command kubectl apply -f config/setup/all_in_one.yaml, it still does not create the resources in the kube-system namespace:

[root@cloud-node yurt-app-manager]# kubectl apply -f config/setup/all_in_one.yaml
Warning: resource customresourcedefinitions/nodepools.apps.openyurt.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/nodepools.apps.openyurt.io configured
Warning: resource customresourcedefinitions/yurtappdaemons.apps.openyurt.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/yurtappdaemons.apps.openyurt.io configured
Warning: resource customresourcedefinitions/yurtappsets.apps.openyurt.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/yurtappsets.apps.openyurt.io configured
Warning: resource customresourcedefinitions/yurtingresses.apps.openyurt.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/yurtingresses.apps.openyurt.io configured
serviceaccount/yurt-app-manager created
secret/yurt-app-manager created
clusterrole.rbac.authorization.k8s.io/yurt-app-manager created
clusterrolebinding.rbac.authorization.k8s.io/yurt-app-manager created
role.rbac.authorization.k8s.io/yurt-app-manager created
rolebinding.rbac.authorization.k8s.io/yurt-app-manager created
service/yurt-app-manager-webhook created
deployment.apps/yurt-app-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/yurt-app-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/yurt-app-manager created
serviceaccount/yurt-app-manager-admission created
clusterrole.rbac.authorization.k8s.io/yurt-app-manager-admission created
clusterrolebinding.rbac.authorization.k8s.io/yurt-app-manager-admission created
role.rbac.authorization.k8s.io/yurt-app-manager-admission created
rolebinding.rbac.authorization.k8s.io/yurt-app-manager-admission created
job.batch/yurt-app-manager-admission-create created
job.batch/yurt-app-manager-admission-patch created
[root@cloud-node yurt-app-manager]# kubectl get pods -A -o wide
NAMESPACE      NAME                                       READY   STATUS      RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
default        busybox                                    1/1     Running     1          82m    10.244.1.14     edge-pi-node   <none>           <none>
default        yurt-app-manager-5d5c7cbf6d-f6gqc          1/1     Running     0          18s    10.244.1.15     edge-pi-node   <none>           <none>
default        yurt-app-manager-admission-create-wllvg    0/1     Completed   0          17s    10.244.1.16     edge-pi-node   <none>           <none>
default        yurt-app-manager-admission-patch-7sv7z     0/1     Completed   0          17s    10.244.1.17     edge-pi-node   <none>           <none>
kube-flannel   kube-flannel-ds-mrbtr                      1/1     Running     0          71m    133.133.135.9   edge-pi-node   <none>           <none>
kube-flannel   kube-flannel-ds-vgvbg                      1/1     Running     0          71m    133.133.135.8   cloud-node     <none>           <none>
kube-system    coredns-k2gl6                              1/1     Running     0          96m    10.244.1.13     edge-pi-node   <none>           <none>
kube-system    coredns-r9lzz                              1/1     Running     0          19h    10.244.0.5      cloud-node     <none>           <none>
kube-system    etcd-cloud-node                            1/1     Running     0          23h    133.133.135.8   cloud-node     <none>           <none>
kube-system    kube-apiserver-cloud-node                  1/1     Running     0          23h    133.133.135.8   cloud-node     <none>           <none>
kube-system    kube-controller-manager-cloud-node         1/1     Running     0          20h    133.133.135.8   cloud-node     <none>           <none>
kube-system    kube-proxy-44p7f                           1/1     Running     0          83m    133.133.135.8   cloud-node     <none>           <none>
kube-system    kube-proxy-7497n                           1/1     Running     0          83m    133.133.135.9   edge-pi-node   <none>           <none>
kube-system    kube-scheduler-cloud-node                  1/1     Running     0          23h    133.133.135.8   cloud-node     <none>           <none>
kube-system    yurt-controller-manager-77b97fd47b-hn44t   1/1     Running     0          16h    133.133.135.8   cloud-node     <none>           <none>
kube-system    yurt-hub-edge-pi-node                      1/1     Running     0          101m   133.133.135.9   edge-pi-node   <none>           <none>
kube-system    yurt-tunnel-agent-mrpth                    1/1     Running     0          115m   133.133.135.9   edge-pi-node   <none>           <none>
kube-system    yurt-tunnel-server-6fdb679789-gc5nq        1/1     Running     0          116m   133.133.135.8   cloud-node     <none>           <none>

If I use the command kubectl apply -f config/setup/all_in_one.yaml -n kube-system, I get the following errors instead:

[root@cloud-node yurt-app-manager]# kubectl apply -f config/setup/all_in_one.yaml -n kube-system
customresourcedefinition.apiextensions.k8s.io/nodepools.apps.openyurt.io created
customresourcedefinition.apiextensions.k8s.io/yurtappdaemons.apps.openyurt.io created
customresourcedefinition.apiextensions.k8s.io/yurtappsets.apps.openyurt.io created
customresourcedefinition.apiextensions.k8s.io/yurtingresses.apps.openyurt.io created
serviceaccount/yurt-app-manager created
secret/yurt-app-manager created
clusterrole.rbac.authorization.k8s.io/yurt-app-manager created
clusterrolebinding.rbac.authorization.k8s.io/yurt-app-manager created
role.rbac.authorization.k8s.io/yurt-app-manager created
rolebinding.rbac.authorization.k8s.io/yurt-app-manager created
deployment.apps/yurt-app-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/yurt-app-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/yurt-app-manager created
clusterrole.rbac.authorization.k8s.io/yurt-app-manager-admission created
clusterrolebinding.rbac.authorization.k8s.io/yurt-app-manager-admission created
the namespace from the provided object "default" does not match the namespace "kube-system". You must pass '--namespace=default' to perform this operation.
the namespace from the provided object "default" does not match the namespace "kube-system". You must pass '--namespace=default' to perform this operation.
the namespace from the provided object "default" does not match the namespace "kube-system". You must pass '--namespace=default' to perform this operation.
the namespace from the provided object "default" does not match the namespace "kube-system". You must pass '--namespace=default' to perform this operation.
the namespace from the provided object "default" does not match the namespace "kube-system". You must pass '--namespace=default' to perform this operation.
the namespace from the provided object "default" does not match the namespace "kube-system". You must pass '--namespace=default' to perform this operation.
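The errors above come from kubectl's namespace-consistency check: when an object's metadata carries an explicit namespace, the -n flag cannot override it, and a mismatch is rejected. As a sketch, assuming the objects in all_in_one.yaml declare a literal namespace: default field (which the error text suggests), this demonstrates the situation and one possible workaround (demo.yaml is a hypothetical stand-in for the real manifest):

```shell
# Minimal stand-in for one namespaced object from the manifest,
# with the namespace hard-coded the way the errors above imply.
cat > demo.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: yurt-app-manager
  namespace: default
EOF

# Rewriting the hard-coded field before applying avoids the conflict,
# since "-n kube-system" cannot override an explicit metadata.namespace.
sed -i 's/namespace: default/namespace: kube-system/' demo.yaml
grep 'namespace:' demo.yaml

# The same rewrite on config/setup/all_in_one.yaml would let a plain
# "kubectl apply -f" place the namespaced objects in kube-system.
```

This is only a workaround for a manifest that pins its namespace; the Helm chart mentioned below parameterizes the namespace instead, which is why installing via Helm works without editing anything.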
Lulucyliu commented 2 years ago

Specifying the namespace through a Helm installation works: helm install yurt-app-manager ./charts/yurt-app-manager/ -n kube-system

huiwq1990 commented 2 years ago

@Lulucyliu you can follow the doc at https://github.com/openyurtio/openyurt-helm#usage to install yurt-app-manager.

rambohe-ch commented 1 year ago

@Lulucyliu End users are recommended to use the Helm charts to install yurt-app-manager, and the all_in_one.yaml file has been removed, so I will close this issue.