KrishnaKoppineni opened this issue 5 years ago
[root@prod-app1 ~]# kubectl get pod -n capiot
NAME READY STATUS RESTARTS AGE
b2b-6d44c5d847-82zgp 1/1 Running 0 28h
b2bgw-66f9f95c85-j9pxf 0/1 CrashLoopBackOff 12 4h12m
dm-5cbd787874-bp6ld 1/1 Running 0 28h
gw-59f69487f7-8f7j8 0/1 CrashLoopBackOff 7 28h
mon-5fb8d485cb-jrt76 0/1 CrashLoopBackOff 8 28h
nats-c666bb65b-qlg6g 1/1 Running 0 28h
ne-f9b6468cc-tx74j 0/1 CrashLoopBackOff 7 28h
nginx-859c9759f8-2jqj6 1/1 Running 0 28h
pm-64f765f7f5-fngbw 0/1 CrashLoopBackOff 7 28h
redis-547fdbb749-nfw5r 1/1 Running 0 28h
sec-5bcb5f85fb-752xv 0/1 CrashLoopBackOff 7 4h12m
sm-59677d5bcc-l72g8 0/1 Running 7 28h
user-6c886f554-vcwl2 0/1 CrashLoopBackOff 8 28h
wf-d85b7c498-gpkpp 0/1 CrashLoopBackOff 9 28h
[root@prod-app1 ~]#
When I look into one of the pod logs, it gives the following error: "Not able to connect to MongoDB".
[root@prod-app1 ~]# kubectl logs -f -n capiot sec-5bcb5f85fb-752xv
WARNING: No configurations found in configuration directory:/app/config
WARNING: To disable this warning set SUPPRESS_NO_CONFIG_WARNING in the environment.
[2019-09-04T12:12:57.794] [INFO] [security] [sec-5bcb5f85fb-752xv] - Server started on port 10007
[2019-09-04T12:13:02.443] [ERROR] [odp-utils-nats-streaming] - Could not connect to server: Error: getaddrinfo EAI_AGAIN nats.capiot:4222
[2019-09-04T12:13:02.577] [ERROR] security [sec-5bcb5f85fb-752xv] - ERROR :: Unable to connect to Kubernetes API server
[2019-09-04T12:13:02.579] [INFO] security [sec-5bcb5f85fb-752xv] -
[2019-09-04T12:13:13.642] [ERROR] [security] [sec-5bcb5f85fb-752xv] - ------------------------- Database connection lost -------------------------
[2019-09-04T12:13:13.644] [ERROR] [security] [sec-5bcb5f85fb-752xv] - { MongoNetworkError: failed to connect to server [172.31.1.231:27019] on first connect [MongoNetworkError: connect EHOSTUNREACH 172.31.1.231:27019]
at Pool.<anonymous> (/app/node_modules/mongodb-core/lib/topologies/server.js:564:11)
at emitOne (events.js:116:13)
at Pool.emit (events.js:211:7)
at Connection.<anonymous> (/app/node_modules/mongodb-core/lib/connection/pool.js:317:12)
at Object.onceWrapper (events.js:317:30)
at emitTwo (events.js:126:13)
at Connection.emit (events.js:214:7)
at Socket.<anonymous> (/app/node_modules/mongodb-core/lib/connection/connection.js:246:50)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:66:8)
at _combinedTickCallback (internal/process/next_tick.js:139:11)
at process._tickDomainCallback (internal/process/next_tick.js:219:9)
name: 'MongoNetworkError',
errorLabels: [ 'TransientTransactionError' ],
[Symbol(mongoErrorContextSymbol)]: {} }
[root@prod-app1 ~]#
But I am able to telnet from the host machine.
[root@prod-app1 ~]# curl http://172.31.1.231:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
[root@prod-app1 ~]#
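For comparison, the same reachability check can also be run from inside the pod network rather than from the host. A minimal sketch, assuming a busybox image is reachable from this offline cluster (the pod name nettest is purely illustrative):

kubectl -n capiot run nettest --rm -it --restart=Never --image=busybox -- sh
# inside the pod:
nslookup nats.capiot          # exercises the DNS path that failed with EAI_AGAIN
telnet 172.31.1.231 27019     # exercises the MongoDB path that failed with EHOSTUNREACH
exit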
After restarting the weave-net pod, everything works fine and the application pods are able to connect to MongoDB. But the issue is only resolved for a limited time; after the cluster has been in use for a while, the same problem comes back.
[root@prod-app1 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-7xqff 1/1 Running 4 5d5h
coredns-5c98db65d4-wpkkl 1/1 Running 4 5d5h
etcd-prod-app1.gati.com 1/1 Running 4 5d5h
kube-apiserver-prod-app1.gati.com 1/1 Running 4 5d5h
kube-controller-manager-prod-app1.gati.com 1/1 Running 4 5d5h
kube-proxy-g8p52 1/1 Running 1 5d4h
kube-proxy-wt2kr 1/1 Running 4 5d5h
kube-scheduler-prod-app1.gati.com 1/1 Running 4 5d5h
weave-net-qcftw 2/2 Running 0 4h14m
[root@prod-app1 ~]# kubectl delete pod -n kube-system weave-net-qcftw
pod "weave-net-qcftw" deleted
[root@prod-app1 ~]#
Now all the pods are in the Running state and the issue is resolved for the moment, but it will come back again after some time.
[root@prod-app1 ~]# kubectl get pod -n capiot
NAME READY STATUS RESTARTS AGE
b2b-6d44c5d847-j7fnr 1/1 Running 0 95s
b2bgw-66f9f95c85-qlzn8 1/1 Running 3 95s
dm-5cbd787874-c6n8f 1/1 Running 0 95s
gw-59f69487f7-qlskd 1/1 Running 0 95s
mon-5fb8d485cb-hmsms 1/1 Running 0 95s
nats-c666bb65b-gwc7z 1/1 Running 0 95s
ne-f9b6468cc-z9mfn 1/1 Running 0 95s
nginx-859c9759f8-28jlq 1/1 Running 0 95s
pm-64f765f7f5-ghz8b 1/1 Running 0 95s
redis-547fdbb749-pfhn9 1/1 Running 0 94s
sec-5bcb5f85fb-4w4nn 1/1 Running 0 94s
sm-59677d5bcc-7jgwz 1/1 Running 0 94s
user-6c886f554-z629v 1/1 Running 0 94s
wf-d85b7c498-nx2lv 1/1 Running 1 94s
So, can anyone please help me resolve this issue? Thank you.
Weave Net simply sets up iptables rules to masquerade outbound traffic from the pods that is not destined for other pods (i.e. traffic that leaves Weave's overlay network). Check whether the traffic is leaving the node but getting dropped between the nodes. Does your node have multiple interfaces?
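A rough sketch of how that could be checked, using the interface name and MongoDB address from your output (the WEAVE chain in the nat table is where Weave Net's MASQUERADE rules normally live):

# on the node, watch whether pod traffic to MongoDB actually leaves via the external interface
tcpdump -ni ens224 host 172.31.1.231 and port 27019

# check that the masquerade rules exist and that their packet counters increase while a pod retries
iptables -t nat -L WEAVE -n -v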
Can you tell me how we can check whether the node has multiple interfaces?
I am running Kubernetes in offline mode and using a separate volume for the working directory.
The sysctl configuration in /etc/sysctl.d/k8s.conf is:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@app-uat ~]# lsmod | grep br_netfilter
br_netfilter 22256 1 xt_physdev
bridge 146976 2 br_netfilter,ebtable_broute
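For completeness, a small sketch of how to confirm those settings are actually in effect and to re-apply them after (re)loading the module:

# show the live values the kernel is using
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables

# reload the bridge netfilter module and re-read /etc/sysctl.d/*.conf
modprobe br_netfilter
sysctl --system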
> Can you tell me how we can check whether the node has multiple interfaces?
If you run ip link show you should get a list of devices; then, if you discount the loopback device lo, any bridges such as docker0 and weave, and any virtual devices whose names begin with "v", whatever is left are the interfaces on your node.
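If the list is long because of all the veth pairs, a quick way to narrow it down is a filter along these lines (the exclusion list is just an illustration and may need adjusting for your setup):

ip -o link show | egrep -v 'veth|docker0|weave|datapath|dummy|vxlan|lo:'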
Below is the output of the ip link show command... Here I am able to see only one physical interface.
[root@app-uat yamlfile]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:a9:bf:67 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:6b:4e:6e:9e brd ff:ff:ff:ff:ff:ff
516: vethwepl2b8a17b@if515: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 2a:dc:ab:02:37:46 brd ff:ff:ff:ff:ff:ff link-netnsid 3
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 3a:59:78:c1:f2:ff brd ff:ff:ff:ff:ff:ff
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 22:80:4d:e6:0e:34 brd ff:ff:ff:ff:ff:ff
7: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 8a:e3:c3:1c:f7:7f brd ff:ff:ff:ff:ff:ff
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP mode DEFAULT group default
link/ether 0e:47:11:ff:c4:52 brd ff:ff:ff:ff:ff:ff
522: vethwepl4578f4a@if521: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 92:8d:a9:09:8d:33 brd ff:ff:ff:ff:ff:ff link-netnsid 6
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 66:07:05:f5:ea:e4 brd ff:ff:ff:ff:ff:ff
524: vethwepl452680a@if523: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 02:43:a1:a3:d7:83 brd ff:ff:ff:ff:ff:ff link-netnsid 17
526: vethwepl58ab21b@if525: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 32:db:10:1b:ce:92 brd ff:ff:ff:ff:ff:ff link-netnsid 16
528: vethwepl705aa52@if527: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether be:d0:fe:ee:d3:d3 brd ff:ff:ff:ff:ff:ff link-netnsid 19
530: vethwepl604c904@if529: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 7e:a3:8c:79:6d:8b brd ff:ff:ff:ff:ff:ff link-netnsid 20
532: vethwepl07bcd2c@if531: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether e6:70:f1:51:e5:dd brd ff:ff:ff:ff:ff:ff link-netnsid 22
534: vethwepl66458ae@if533: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether be:be:8d:43:c0:2e brd ff:ff:ff:ff:ff:ff link-netnsid 23
536: vethwepl9aed25b@if535: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether d2:e3:36:74:d4:9b brd ff:ff:ff:ff:ff:ff link-netnsid 24
538: vethweplc460359@if537: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether ae:b2:90:c3:0c:29 brd ff:ff:ff:ff:ff:ff link-netnsid 27
540: vethwepled86a8e@if539: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether ae:4f:a6:b9:d2:56 brd ff:ff:ff:ff:ff:ff link-netnsid 36
542: vethwepld2ac26e@if541: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether ce:a0:9b:a8:f1:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 37
544: vethwepl645331d@if543: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 0a:f4:f0:83:83:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 38
546: vethwepld97fcb6@if545: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 22:c1:31:15:c9:7f brd ff:ff:ff:ff:ff:ff link-netnsid 39
548: vethwepl899db35@if547: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 3a:92:b8:24:6a:89 brd ff:ff:ff:ff:ff:ff link-netnsid 40
550: vethwepl7660b31@if549: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 2e:47:64:27:ea:28 brd ff:ff:ff:ff:ff:ff link-netnsid 1
552: vethwepld7e9f57@if551: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether c2:75:e9:27:65:53 brd ff:ff:ff:ff:ff:ff link-netnsid 2
554: vethwepl1346b4b@if553: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 7a:92:2e:ff:ea:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
556: vethwepl9f6de44@if555: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether c6:cd:98:55:a0:fc brd ff:ff:ff:ff:ff:ff link-netnsid 5
350: veth4c0239b@if349: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 46:8c:26:49:52:75 brd ff:ff:ff:ff:ff:ff link-netnsid 0
373: vethweple2775c0@if372: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether ce:45:07:f8:4c:2c brd ff:ff:ff:ff:ff:ff link-netnsid 10
377: vethwepld9432f5@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether ca:22:22:e9:3f:38 brd ff:ff:ff:ff:ff:ff link-netnsid 12
488: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc noqueue master datapath state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 92:c3:b7:4e:5e:b2 brd ff:ff:ff:ff:ff:ff
494: vethwepl2d035bd@if493: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 66:05:70:f8:29:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 11
496: vethwepl7457b53@if495: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether ba:e6:8c:42:05:29 brd ff:ff:ff:ff:ff:ff link-netnsid 26
498: vethweplf944488@if497: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 9e:e6:86:57:02:ca brd ff:ff:ff:ff:ff:ff link-netnsid 29
506: vethweplb38c623@if505: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether 9a:1a:79:f6:f8:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 32
510: vethwepl8ea2003@if509: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default
link/ether c6:08:d4:72:7c:99 brd ff:ff:ff:ff:ff:ff link-netnsid 35
Hi, I am running a simple Kubernetes cluster with a single node that acts as both master and worker.
1) I am facing an issue with Weave Net networking in Kubernetes: suddenly my pods are not able to communicate with MongoDB running on another host, even though both are in the same zone. But I am able to telnet to it from the server.
2) Only after restarting the weave-net pod are the application pods able to connect to MongoDB again.
K8s version: 1.15
Weave Net version: 2.5.2
Docker version: 18.09.7
uname -a: Linux prod-app1.gati.com 3.10.0-862.el7.x86_64
Can anyone please help me resolve this issue? Thank you.