Closed: lpureenaece closed this issue 2 years ago.
Hi, I had a similar issue. I don't know if it can be helpful for you but, in my case, it was some problem in the DNS resolution; it was solved by restarting coredns.
No more activity!
Hi, I had a similar issue. I don't know if it can be helpful for you but, in my case, it was some problem in the DNS resolution; it was solved by restarting coredns.
I restarted coredns with the command "kubectl -n kube-system rollout restart deployment coredns", but the issue persists.
Hello!
Is this issue different from this one?
This is the same issue, but in the previous issue I had created the free5gc and coredns pods in two different namespaces, which is why I was facing that problem; when I created both in the same namespace, it went away.
But at present both are in the same namespace and I am still facing the pods-stuck-in-Init issue.
NRF has not started due to its init-container, which seems to be failing. The NRF init-container basically tries to connect to MongoDB, doing a nc. Is MongoDB reachable using the service name?
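As a sketch of what such a readiness gate usually looks like (the service name mongodb, port 27017, image tags, and container names here are illustrative assumptions, not the exact free5gc chart values):

```yaml
# Hypothetical NRF Pod with a wait-for-MongoDB init container;
# adjust the service name and port to match your deployment.
apiVersion: v1
kind: Pod
metadata:
  name: nrf-example
spec:
  initContainers:
  - name: wait-mongo
    image: busybox:1.36
    # Loop until a TCP connection to the MongoDB Service succeeds;
    # the main container only starts once this exits successfully.
    command: ["sh", "-c", "until nc -z mongodb 27017; do echo waiting for mongodb; sleep 2; done"]
  containers:
  - name: nrf
    image: free5gc/nrf   # placeholder image name
```

If DNS cannot resolve the service name, the nc loop never succeeds and the Pod stays in Init, which matches the symptom in this thread.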
Hi @lpureenaece
The way the NRF's init container checks for MongoDB readiness has not changed, which means you are probably encountering DNS problems on your cluster. As @ebucchianeri mentioned, you should try to reach MongoDB using its service name from another Pod on the cluster; you can use the busybox image to do it.
Best,
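As a sketch of that busybox approach (the Pod name and image tag are illustrative), deploy a throwaway Pod in the same namespace as MongoDB:

```yaml
# Disposable DNS-debug Pod; delete it when you are done.
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sleep", "3600"]
```

Then, assuming the MongoDB Service is named mongodb, exec into it and resolve the name: kubectl -n <your-namespace> exec -it dns-debug -- nslookup mongodb. If this fails, and resolving kubernetes.default fails too, the problem is cluster DNS rather than the free5gc charts.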
NRF has not started due to its init-container, which seems to be failing. The NRF init-container basically tries to connect to MongoDB, doing a nc. Is MongoDB reachable using the service name?
Thanks for the reply @ebucchianeri , Could you please let me know what would be the command to check this mongoDB reachability check using service name?
First, can you share the logs of the init container please? kubectl -n <your-namespace> logs <nrf-pod-name> -c wait-mongo
Then, for debugging, you can run a busybox Pod in the same namespace as MongoDB and run nslookup <mongo-service-name> inside the Pod. More reading here.
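To make the init container's behavior concrete, here is a minimal one-shot sketch of the check it performs, runnable anywhere nc is available. The default host and port (mongodb:27017) are assumptions; override them via MONGO_HOST/MONGO_PORT to match your service.

```shell
# One-shot version of the wait-mongo probe: report "reachable" if a TCP
# connection to the MongoDB service opens within 2 seconds, else "unreachable".
MONGO_HOST="${MONGO_HOST:-mongodb}"
MONGO_PORT="${MONGO_PORT:-27017}"

if nc -z -w 2 "$MONGO_HOST" "$MONGO_PORT" 2>/dev/null; then
  RESULT="reachable"
else
  RESULT="unreachable"
fi
echo "$MONGO_HOST:$MONGO_PORT is $RESULT"
```

The real init container loops on this check until it succeeds; if the service name never resolves, it never succeeds and the NRF Pod stays in Init.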
Seems like a problem with service name resolution. I had the same issue and solved it by restarting coredns, but since that does not work in your case, maybe you could check the status or logs of the coredns pods.
I have deployed two different Kubernetes clusters, one with the Flannel network plugin and another with Calico. On the cluster with Flannel the free5gc pods run fine, but on the cluster with Calico I am getting this init issue. Do you have any idea about it? Please confirm.
I don't think this is due to different CNI plugins. I am using Calico too, without any problems.
The idea is to try a simple application deployment on the cluster where it is malfunctioning and debug the DNS names resolution. https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
Don't hesitate to re-open if you have more information.
Raouf
Description:
Logs:
cmd: "kubectl -n kube-system get pods --all-namespaces"
cmd: "kubectl describe pod free5gc-1629270501-nrf-694fd8cdd6-cxqvv -n kube-system"
cmd: "kubectl get pvc,pv,svc --all-namespaces -o wide"
cmd: "kubectl get network-attachment-definitions --all-namespaces"
Please assist me.