Open ifrankrui opened 2 years ago
@ifrankrui
Please update the issue with the output of the following commands from both containers:
route -n
ifconfig
ping racnode1
ping racnode2
Please also provide the following from the Docker host:
route -n
ifconfig
docker network ls
docker inspect <pub network>
docker inspect <priv network>
racnode1:
[grid@racnode1 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.17.1    0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.224.0   U     0      0        0 eth0
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.17.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
[grid@racnode1 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.17.150 netmask 255.255.255.0 broadcast 192.168.17.255
ether 02:42:c0:a8:11:96 txqueuelen 0 (Ethernet)
RX packets 8702 bytes 1852302 (1.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9893 bytes 3049899 (2.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.2.240 netmask 255.255.224.0 broadcast 169.254.31.255
ether 02:42:c0:a8:11:96 txqueuelen 0 (Ethernet)
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.150 netmask 255.255.255.0 broadcast 172.16.1.255
ether 02:42:ac:10:01:96 txqueuelen 0 (Ethernet)
RX packets 16916 bytes 3377502 (3.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 19019 bytes 42043230 (40.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.160 netmask 255.255.255.0 broadcast 172.16.1.255
ether 02:42:ac:10:01:96 txqueuelen 0 (Ethernet)
eth1:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.172 netmask 255.255.255.0 broadcast 172.16.1.255
ether 02:42:ac:10:01:96 txqueuelen 0 (Ethernet)
eth1:3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.171 netmask 255.255.255.0 broadcast 172.16.1.255
ether 02:42:ac:10:01:96 txqueuelen 0 (Ethernet)
eth1:4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.170 netmask 255.255.255.0 broadcast 172.16.1.255
ether 02:42:ac:10:01:96 txqueuelen 0 (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 245368 bytes 691762059 (659.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 245368 bytes 691762059 (659.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[grid@racnode1 ~]$ ping racnode1
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.035 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.059 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.030 ms
^C
--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4104ms
rtt min/avg/max/mdev = 0.025/0.038/0.059/0.013 ms
[grid@racnode1 ~]$ ping racnode2
PING racnode2.example.com (172.16.1.151) 56(84) bytes of data.
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=2 ttl=64 time=0.081 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=3 ttl=64 time=0.076 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=4 ttl=64 time=0.100 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=5 ttl=64 time=0.112 ms
^C
--- racnode2.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4113ms
rtt min/avg/max/mdev = 0.055/0.084/0.112/0.022 ms
racnode2:
[grid@racnode2 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.17.1    0.0.0.0         UG    0      0        0 eth0
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.17.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
[grid@racnode2 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.17.151 netmask 255.255.255.0 broadcast 192.168.17.255
ether 02:42:c0:a8:11:97 txqueuelen 0 (Ethernet)
RX packets 8677 bytes 1980401 (1.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8645 bytes 1811090 (1.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.151 netmask 255.255.255.0 broadcast 172.16.1.255
ether 02:42:ac:10:01:97 txqueuelen 0 (Ethernet)
RX packets 18487 bytes 41965250 (40.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16469 bytes 3312173 (3.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 12482 bytes 1433143 (1.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12482 bytes 1433143 (1.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[grid@racnode2 ~]$ ping racnode1
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.099 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.104 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.051 ms
^C
--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4089ms
rtt min/avg/max/mdev = 0.051/0.073/0.104/0.024 ms
[grid@racnode2 ~]$ ping racnode2
PING racnode2.example.com (172.16.1.151) 56(84) bytes of data.
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=3 ttl=64 time=0.029 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=4 ttl=64 time=0.078 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=5 ttl=64 time=0.077 ms
^C
--- racnode2.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4100ms
rtt min/avg/max/mdev = 0.024/0.049/0.078/0.024 ms
[grid@racnode2 ~]$ ping racnode-cman1
PING racnode-cman1.example.com (172.16.1.15) 56(84) bytes of data.
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=2 ttl=64 time=0.093 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=3 ttl=64 time=0.090 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=4 ttl=64 time=0.085 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=5 ttl=64 time=0.046 ms
[grid@racnode2 ~]$ ping 192.168.17.25
PING 192.168.17.25 (192.168.17.25) 56(84) bytes of data.
64 bytes from 192.168.17.25: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 192.168.17.25: icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from 192.168.17.25: icmp_seq=3 ttl=64 time=0.063 ms
64 bytes from 192.168.17.25: icmp_seq=4 ttl=64 time=0.115 ms
Docker host:
[root@vm-oracle ansible]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.10.1      0.0.0.0         UG    100    0        0 eth0
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 eth0
168.63.129.16   10.10.10.1      255.255.255.255 UGH   100    0        0 eth0
169.254.169.254 10.10.10.1      255.255.255.255 UGH   100    0        0 eth0
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br-8cb17468844a
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.17.0    0.0.0.0         255.255.255.0   U     0      0        0 br-c3348c4e73ef
[root@vm-oracle ansible]# ifconfig
br-8cb17468844a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.1 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::42:37ff:fecd:839f prefixlen 64 scopeid 0x20<link>
ether 02:42:37:cd:83:9f txqueuelen 0 (Ethernet)
RX packets 84 bytes 6922 (6.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 84 bytes 6922 (6.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-c3348c4e73ef: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.17.1 netmask 255.255.255.0 broadcast 192.168.17.255
inet6 fe80::42:60ff:fe27:b599 prefixlen 64 scopeid 0x20<link>
ether 02:42:60:27:b5:99 txqueuelen 0 (Ethernet)
RX packets 271 bytes 28746 (28.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 505 bytes 72444 (70.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:8dff:fefa:ff77 prefixlen 64 scopeid 0x20<link>
ether 02:42:8d:fa:ff:77 txqueuelen 0 (Ethernet)
RX packets 13265 bytes 796296 (777.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24032 bytes 402760634 (384.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.101 netmask 255.255.255.0 broadcast 10.10.10.255
inet6 fe80::222:48ff:fe3f:740 prefixlen 64 scopeid 0x20<link>
ether 00:22:48:3f:07:40 txqueuelen 1000 (Ethernet)
RX packets 5534803 bytes 8086939316 (7.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2470473 bytes 185886276 (177.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 84 bytes 6922 (6.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 84 bytes 6922 (6.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth2548638: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::1c4d:a5ff:febc:5f9e prefixlen 64 scopeid 0x20<link>
ether 1e:4d:a5:bc:5f:9e txqueuelen 0 (Ethernet)
RX packets 10366 bytes 3115093 (2.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9174 bytes 1917246 (1.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth70cae52: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::682a:b9ff:fe8b:9022 prefixlen 64 scopeid 0x20<link>
ether 6a:2a:b9:8b:90:22 txqueuelen 0 (Ethernet)
RX packets 16562 bytes 3324362 (3.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18583 bytes 41975589 (40.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth7a5ae2a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::e867:f1ff:fe1a:9de0 prefixlen 64 scopeid 0x20<link>
ether ea:67:f1:1a:9d:e0 txqueuelen 0 (Ethernet)
RX packets 9025 bytes 1863290 (1.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9060 bytes 2032935 (1.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth9d443d8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::b8e9:63ff:fec9:49c4 prefixlen 64 scopeid 0x20<link>
ether ba:e9:63:c9:49:c4 txqueuelen 0 (Ethernet)
RX packets 271 bytes 28746 (28.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 505 bytes 72444 (70.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vetha061b16: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::90e6:faff:fee7:8946 prefixlen 64 scopeid 0x20<link>
ether 92:e6:fa:e7:89:46 txqueuelen 0 (Ethernet)
RX packets 19118 bytes 42054189 (40.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 17011 bytes 3390102 (3.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethc2bb90f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::d8eb:16ff:fec4:bb15 prefixlen 64 scopeid 0x20<link>
ether da:eb:16:c4:bb:15 txqueuelen 0 (Ethernet)
RX packets 652 bytes 89366 (87.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 800 bytes 135457 (132.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethe3b9deb: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::184c:55ff:fef6:d13a prefixlen 64 scopeid 0x20<link>
ether 1a:4c:55:f6:d1:3a txqueuelen 0 (Ethernet)
RX packets 920642 bytes 4235196164 (3.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1072028 bytes 6854864577 (6.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@vm-oracle ansible]# docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
cd5b36c86309   bridge         bridge    local
087f6de1905d   host           host      local
4c0ba1e8832b   none           null      local
c3348c4e73ef   rac_priv1_nw   bridge    local
8cb17468844a   rac_pub1_nw    bridge    local
[root@vm-oracle ansible]# docker network inspect rac_pub1_nw
[
{
"Name": "rac_pub1_nw",
"Id": "8cb17468844a86f008b46c95b2dacf85e37c31835a80495444b0a3df18256fff",
"Created": "2022-03-01T10:37:17.704421146Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.16.1.0/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4d31d422f5f3ae7d5198cf3e6e7f7dfcee3db35a5175b7bd9fab054d4b11dd6e": {
"Name": "racnode-cman",
"EndpointID": "a8ca92dd92c3b9d8dc2d0b7d828e2678187bb65f0c1b2d2fb9280bcf5644afa1",
"MacAddress": "02:42:ac:10:01:0f",
"IPv4Address": "172.16.1.15/24",
"IPv6Address": ""
},
"7764952e8f96d1a1d9369de7f7a979f4a085c5517ff818d6017ac2c6d88fb60c": {
"Name": "racnode1",
"EndpointID": "a92b93779a641ee49340000fb803249e0703acf5d4736449ce567dec31951ee2",
"MacAddress": "02:42:ac:10:01:96",
"IPv4Address": "172.16.1.150/24",
"IPv6Address": ""
},
"82bdae1fb4b778669a307a77eaadc22d4cefce1629ff3f345dbe136bd725a189": {
"Name": "racnode2",
"EndpointID": "de541a06c92875ee2193f9e115a49d8fdbe3d14ac3ea6f6486425a7539060af3",
"MacAddress": "02:42:ac:10:01:97",
"IPv4Address": "172.16.1.151/24",
"IPv6Address": ""
},
"b835a5d904970a7a5ffcf81c101e978a66490628c5757f0bfd44da6210b483f2": {
"Name": "racdns",
"EndpointID": "60f0e85c3459544bfe4ce2e99f13d339c1d77a3272aebfe034156c5dfe7d14ef",
"MacAddress": "02:42:ac:10:01:19",
"IPv4Address": "172.16.1.25/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
[root@vm-oracle ansible]# docker network inspect rac_priv1_nw
[
{
"Name": "rac_priv1_nw",
"Id": "c3348c4e73ef1c1ee07d7d4a58427b0208ae753e768d0bfb88b76a5c5aad4efd",
"Created": "2022-03-01T10:37:24.28653065Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.17.0/24"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7764952e8f96d1a1d9369de7f7a979f4a085c5517ff818d6017ac2c6d88fb60c": {
"Name": "racnode1",
"EndpointID": "d5c34199ca7b0aa509bfad61322bfac5921ba9b41cd795effda5c39006569232",
"MacAddress": "02:42:c0:a8:11:96",
"IPv4Address": "192.168.17.150/24",
"IPv6Address": ""
},
"82bdae1fb4b778669a307a77eaadc22d4cefce1629ff3f345dbe136bd725a189": {
"Name": "racnode2",
"EndpointID": "96f78b8103e6201233145af341e19a392d0216c5247dc068f6805c3f38294cae",
"MacAddress": "02:42:c0:a8:11:97",
"IPv4Address": "192.168.17.151/24",
"IPv6Address": ""
},
"ddad8ec035f2d794bfa67d7f0dee5512c19cfda62ea57169904425527124c1b6": {
"Name": "racnode-storage",
"EndpointID": "57d853322c83472bb397e05993c6c73314124ee9f511cf549a1ec3db0e036f7f",
"MacAddress": "02:42:c0:a8:11:19",
"IPv4Address": "192.168.17.25/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
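To make the inspect output easier to compare against the `ifconfig` output inside the containers, here is a small sketch that pulls the container name, IPv4 address, and MAC out of a `docker network inspect` JSON document. The `sample` below is trimmed from the rac_pub1_nw inspect above; against a live host you would feed it the full output of `docker network inspect rac_pub1_nw` instead.

```python
import json

# Minimal excerpt of the rac_pub1_nw inspect output shown above
# (container IDs shortened for readability).
sample = '''
[{"Name": "rac_pub1_nw",
  "Containers": {
    "7764": {"Name": "racnode1", "MacAddress": "02:42:ac:10:01:96",
             "IPv4Address": "172.16.1.150/24"},
    "82bd": {"Name": "racnode2", "MacAddress": "02:42:ac:10:01:97",
             "IPv4Address": "172.16.1.151/24"}}}]
'''

def endpoints(inspect_json: str) -> dict:
    """Return {container_name: (ipv4, mac)} for each attached container."""
    net = json.loads(inspect_json)[0]
    return {c["Name"]: (c["IPv4Address"], c["MacAddress"])
            for c in net["Containers"].values()}

for name, (ip, mac) in sorted(endpoints(sample).items()):
    print(f"{name:10s} {ip:18s} {mac}")
```

In this data the MACs and IPs reported by Docker match what `ifconfig` shows on eth0/eth1 inside racnode1 and racnode2, so the endpoint wiring itself looks consistent.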
I can connect to the database via CMAN from racnode2:
[grid@racnode2 ~]$ sqlplus sys/Welcome1@//racnode-cman1.example.com:1521/ORCLCDB as sysdba
SQL*Plus: Release 21.0.0.0.0 - Production on Tue Mar 1 16:11:03 2022
Version 21.3.0.0.0
Copyright (c) 1982, 2021, Oracle. All rights reserved.
Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0
SQL>
Here is the tail of the onmd trace file:
[grid@racnode2 ~]$ tail -n 200 /u01/app/grid/diag/crs/racnode2/crs/trace/onmd.trc
2022-03-01 13:08:54.369 : ONMD:140347053082368: [ INFO] clssnmRcfgMgrThread: Local Join
2022-03-01 13:08:54.369 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: begin on node(2), waittime 193000
2022-03-01 13:08:54.369 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: set curtime (13541794) for my node
2022-03-01 13:08:54.369 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: scanning 32 nodes
2022-03-01 13:08:54.369 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: Node racnode1, number 1, is in an existing cluster with disk state 3
2022-03-01 13:08:54.369 : ONMD:140347053082368: [ WARNING] clssnmLocalJoinEvent: takeover aborted due to cluster member node found on disk
2022-03-01 13:08:54.369 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: TLS HANDSHAKE - SUCCESSFUL for endp 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }
2022-03-01 13:08:54.369 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: peerUser: NULL
2022-03-01 13:08:54.369 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: name:CN=2ff6536af6467f6abffec4d933ce42de_7019844,O=Oracle Clusterware,
2022-03-01 13:08:54.369 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: name:CN=2ff6536af6467f6abffec4d933ce42de_1646133336,O=Oracle_Clusterware,
2022-03-01 13:08:54.369 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: endpoint 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }, auth state: gipcmodTlsAuthStateReady (3)
2022-03-01 13:08:54.369 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthReady: TLS Auth completed Successfully
2022-03-01 13:08:54.369 : ONMD:140347051505408: [ INFO] clssscSelect: conn complete ctx 0x55f28910ad30 endp 0x864e
2022-03-01 13:08:54.369 : ONMD:140347051505408: [ INFO] clssnmInitialMsg: node 1, racnode1, endp (0x7fa50000864e)
2022-03-01 13:08:54.370 : ONMD:140347051505408: [ INFO] clssnmeventhndlr: CONNCOMPLETE node(1), endp(0x864e) sending InitialMsg, conrc=2
2022-03-01 13:08:54.539 : ONMD:140347054659328: [ INFO] clssnmSendingThread: sending join msg to all nodes
2022-03-01 13:08:54.539 : ONMD:140347054659328: [ INFO] clssnmSendingThread: sent 5 join msgs to all nodes
2022-03-01 13:08:54.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:08:55.355 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6482, LATS 13542784, lastSeqNo 6481, uniqueness 1646133623, timestamp 1646140135/13542664
2022-03-01 13:08:55.370 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperProcessDisconnect: processing DISCONNECT for hendp 0x7fa4a8057160 [00000000000086a7] { gipchaEndpoint : port 'd355-49ad-3da1-7d50', peer 'racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', srcCid 00000000-000086a7, dstCid 00000000-0002ce7a, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x4204 }
2022-03-01 13:08:55.370 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperMsgComplete: completing with ret gipcretConnectionLost (12), umsg 0x7fa4d80ea0a0 { msg 0x7fa4d80e4600, ret gipcretRequestPending (15), flags 0x2 }, msg 0x7fa4d80e4600 { type gipchaMsgTypeDisconnect (5), srcCid 00000000-000086a7, dstCid 00000000-00000000 } dataLen 0
2022-03-01 13:08:55.370 :GIPCGMOD:140347078379264: [ INFO] gipcmodGipcCallbackDisconnect: [gipc] Disconnect forced for endp 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 1, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:55.370 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRequest: [gipc] completing req 0x7fa4d80f1f80 [0000000000008705] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8052260, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:55.370 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRecv: [gipc] Completed recv for req 0x7fa4d80f1f80 [0000000000008705] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8052260, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:55.370 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 1, numReady 0, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:55.370 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcDisconnect: [gipc] Issued endpoint close for endp 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 1, numReady 0, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:55.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:08:56.356 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6483, LATS 13543784, lastSeqNo 6482, uniqueness 1646133623, timestamp 1646140136/13543694
2022-03-01 13:08:56.370 :GIPCGMOD:140347078379264: [ INFO] gipcmodGipcCallbackEndpClosed: [gipc] Endpoint close for endp 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 0, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:56.371 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7fa4d80f5120 type gipchaClientReqTypeDeleteName (12)
2022-03-01 13:08:56.371 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRequest: [gipc] completing req 0x7fa4a8042870 [00000000000086b3] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8052260, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:56.371 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRecv: [gipc] Completed recv for req 0x7fa4a8042870 [00000000000086b3] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8052260, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:56.371 : ONMD:140347051505408: [ INFO] clssnmeventhndlr: Disconnecting endp 0x864e ninf 0x55f28910ad30
2022-03-01 13:08:56.371 : ONMD:140347051505408: [ INFO] clssnmDiscHelper: racnode1, node(1) connection failed, endp (0x864e), probe(0x7fa500000000), ninf->endp 0x7fa50000864e
2022-03-01 13:08:56.371 : ONMD:140347051505408: [ INFO] clssnmDiscHelper: node 1 clean up, endp (0x864e), init state 0, cur state 0
2022-03-01 13:08:56.371 :GIPCXCPT:140347051505408: [ INFO] gipcInternalDissociate: obj 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2022-03-01 13:08:56.371 :GIPCXCPT:140347051505408: [ INFO] gipcDissociateF [clssnmDiscHelper : clssnm.c : 4488]: EXCEPTION[ ret gipcretFail (1) ] failed to dissociate obj 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }, flags 0x0
2022-03-01 13:08:56.371 :GIPCXCPT:140347051505408: [ INFO] gipcInternalDissociate: obj 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2022-03-01 13:08:56.371 :GIPCXCPT:140347051505408: [ INFO] gipcDissociateF [clssnmDiscHelper : clssnm.c : 4645]: EXCEPTION[ ret gipcretFail (1) ] failed to dissociate obj 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }, flags 0x0
2022-03-01 13:08:56.371 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:08:56.371 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7fa4a8052260 [000000000000864e] { gipcEndpoint : localAddr 'gipcha://racnode2:d355-49ad-3da1-7d50', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/3bc6-a96c-329f-49a9', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f2890e9380, ready 1, wobj 0x7fa4a8052ee0, sendp (nil) status 0flags 0x2603860e, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:56.371 : ONMD:140347051505408: [ INFO] clssnmDiscEndp: gipcDestroy 0x864e
2022-03-01 13:08:56.371 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:08:56.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:08:57.357 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6484, LATS 13544784, lastSeqNo 6483, uniqueness 1646133623, timestamp 1646140137/13544704
2022-03-01 13:08:57.357 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:08:57.357 : ONMD:140347051505408: [ INFO] clssnmconnect: connecting to addr gipcha://racnode1:nm2_racnode1-c
2022-03-01 13:08:57.363 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7fa4a803c780 type gipchaClientReqTypePublish (1)
2022-03-01 13:08:57.363 : ONMD:140347051505408: [ INFO] clssscConnect: endp 0x8729 - cookie 0x55f28910ad30 - addr gipcha://racnode1:nm2_racnode1-c
2022-03-01 13:08:57.363 : ONMD:140347051505408: [ INFO] clssnmconnect: connecting to node(1), endp(0x8729), flags 0x10002
2022-03-01 13:08:57.363 :GIPCHTHR:140347078379264: [ INFO] gipchaWorkerProcessClientConnect: starting resolve from connect for host:racnode1, port:nm2_racnode1-c, cookie:0x7fa4a803c780
2022-03-01 13:08:57.363 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7fa4d80f5120 type gipchaClientReqTypeResolve (4)
2022-03-01 13:08:57.364 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonCreateResolveResponse: creating resolveResponse for host:racnode1, port:nm2_racnode1-c, haname:8764-6925-ffe7-13cb, ret:0
2022-03-01 13:08:57.364 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperConnect: initiated connect for umsg 0x7fa4d80ea0a0 { msg 0x7fa4d80e48f0, ret gipcretRequestPending (15), flags 0x6 }, msg 0x7fa4d80e48f0 { type gipchaMsgTypeConnect (3), srcPort '0a5f-d359-4cee-fff4', dstPort 'nm2_racnode1-c', srcCid 00000000-00008782, cookie 00007fa4-d80ea0a0 } dataLen 0, endp 0x7fa4a80568c0 [0000000000008782] { gipchaEndpoint : port '0a5f-d359-4cee-fff4', peer ':', srcCid 00000000-00008782, dstCid 00000000-00000000, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x0 } node 0x7fa4cc0d12a0 { host 'racnode1', haName '8764-6925-ffe7-13cb', srcLuid 7b00be91-e82deec0, dstLuid bcc7bd2e-572238ad numInf 1, sentRegister 1, localMonitor 0, baseStream 0x7fa4cc0b12f0 type gipchaNodeType12001 (20), nodeIncarnation 0be8266e-006b3202, incarnation 2, cssIncarnation 0, negDigest 7, roundTripTime 4294967295 lastSeenPingAck 0 nextPingId 1 latencySrc 0 latencyDst 0 flags 0xe10680c}
2022-03-01 13:08:57.364 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperCallbackConnect: completed CONNECT:SEND umsg 0x7fa4d80ea0a0 { msg 0x7fa4d80e48f0, ret gipcretSuccess (0), flags 0xe }, msg 0x7fa4d80e48f0 { type gipchaMsgTypeConnect (3), srcPort '0a5f-d359-4cee-fff4', dstPort 'nm2_racnode1-c', srcCid 00000000-00008782, cookie 00007fa4-d80ea0a0 } dataLen 0, hendp 0x7fa4a80568c0 [0000000000008782] { gipchaEndpoint : port '0a5f-d359-4cee-fff4', peer ':', srcCid 00000000-00008782, dstCid 00000000-00000000, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x0 }
2022-03-01 13:08:57.364 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperProcessConnectAck: CONNACK completed umsg 0x7fa4d80ea0a0 { msg 0x7fa4d80e48f0, ret gipcretSuccess (0), flags 0xe }, msg 0x7fa4d80e48f0 { type gipchaMsgTypeConnect (3), srcPort '0a5f-d359-4cee-fff4', dstPort 'nm2_racnode1-c', srcCid 00000000-00008782, cookie 00007fa4-d80ea0a0 } dataLen 0, hendp 0x7fa4a80568c0 [0000000000008782] { gipchaEndpoint : port '0a5f-d359-4cee-fff4', peer 'racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', srcCid 00000000-00008782, dstCid 00000000-0002cf32, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x204 } node 0x7fa4cc0d12a0 { host 'racnode1', haName '8764-6925-ffe7-13cb', srcLuid 7b00be91-e82deec0, dstLuid bcc7bd2e-572238ad numInf 1, sentRegister 1, localMonitor 0, baseStream 0x7fa4cc0b12f0 type gipchaNodeType12001 (20), nodeIncarnation 0be8266e-006b3202, incarnation 2, cssIncarnation 0, negDigest 7, roundTripTime 4294967295 lastSeenPingAck 0 nextPingId 1 latencySrc 0 latencyDst 0 flags 0xe10680c}
2022-03-01 13:08:57.365 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteConnect: [gipc] completed connect on endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 1, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 0, wobj 0x7fa4a8053150, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }
2022-03-01 13:08:57.365 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthInit: creating connection context ...
2022-03-01 13:08:57.365 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthInit: tls context initialized successfully
2022-03-01 13:08:57.372 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: TLS HANDSHAKE - SUCCESSFUL for endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }
2022-03-01 13:08:57.372 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: peerUser: NULL
2022-03-01 13:08:57.372 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: name:CN=2ff6536af6467f6abffec4d933ce42de_7019844,O=Oracle Clusterware,
2022-03-01 13:08:57.372 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: name:CN=2ff6536af6467f6abffec4d933ce42de_1646133336,O=Oracle_Clusterware,
2022-03-01 13:08:57.372 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: endpoint 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }, auth state: gipcmodTlsAuthStateReady (3)
2022-03-01 13:08:57.372 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthReady: TLS Auth completed Successfully
2022-03-01 13:08:57.372 : ONMD:140347051505408: [ INFO] clssscSelect: conn complete ctx 0x55f28910ad30 endp 0x8729
2022-03-01 13:08:57.372 : ONMD:140347051505408: [ INFO] clssnmInitialMsg: node 1, racnode1, endp (0x7fa500008729)
2022-03-01 13:08:57.372 : ONMD:140347051505408: [ INFO] clssnmeventhndlr: CONNCOMPLETE node(1), endp(0x8729) sending InitialMsg, conrc=2
2022-03-01 13:08:57.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:08:58.358 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6485, LATS 13545784, lastSeqNo 6484, uniqueness 1646133623, timestamp 1646140138/13545714
2022-03-01 13:08:58.373 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperProcessDisconnect: processing DISCONNECT for hendp 0x7fa4a80568c0 [0000000000008782] { gipchaEndpoint : port '0a5f-d359-4cee-fff4', peer 'racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', srcCid 00000000-00008782, dstCid 00000000-0002cf32, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x4204 }
2022-03-01 13:08:58.373 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperMsgComplete: completing with ret gipcretConnectionLost (12), umsg 0x7fa4d80d9380 { msg 0x7fa4d80e50a0, ret gipcretRequestPending (15), flags 0x2 }, msg 0x7fa4d80e50a0 { type gipchaMsgTypeDisconnect (5), srcCid 00000000-00008782, dstCid 00000000-00000000 } dataLen 0
2022-03-01 13:08:58.373 :GIPCGMOD:140347078379264: [ INFO] gipcmodGipcCallbackDisconnect: [gipc] Disconnect forced for endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 1, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:58.373 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRequest: [gipc] completing req 0x7fa4d80e51c0 [00000000000087d9] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8056380, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:58.373 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRecv: [gipc] Completed recv for req 0x7fa4d80e51c0 [00000000000087d9] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8056380, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:58.373 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 1, numReady 0, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:58.373 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcDisconnect: [gipc] Issued endpoint close for endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 1, numReady 0, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:58.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:08:59.359 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6486, LATS 13546784, lastSeqNo 6485, uniqueness 1646133623, timestamp 1646140139/13546744
2022-03-01 13:08:59.373 :GIPCGMOD:140347078379264: [ INFO] gipcmodGipcCallbackEndpClosed: [gipc] Endpoint close for endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 0, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:59.373 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7fa4d80f79c0 type gipchaClientReqTypeDeleteName (12)
2022-03-01 13:08:59.373 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRequest: [gipc] completing req 0x7fa4a8041e80 [000000000000878c] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8056380, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:59.373 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRecv: [gipc] Completed recv for req 0x7fa4a8041e80 [000000000000878c] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8056380, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssnmeventhndlr: Disconnecting endp 0x8729 ninf 0x55f28910ad30
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssnmDiscHelper: racnode1, node(1) connection failed, endp (0x8729), probe(0x7fa500000000), ninf->endp 0x7fa500008729
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssnmDiscHelper: node 1 clean up, endp (0x8729), init state 0, cur state 0
2022-03-01 13:08:59.374 :GIPCXCPT:140347051505408: [ INFO] gipcInternalDissociate: obj 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2022-03-01 13:08:59.374 :GIPCXCPT:140347051505408: [ INFO] gipcDissociateF [clssnmDiscHelper : clssnm.c : 4488]: EXCEPTION[ ret gipcretFail (1) ] failed to dissociate obj 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }, flags 0x0
2022-03-01 13:08:59.374 :GIPCXCPT:140347051505408: [ INFO] gipcInternalDissociate: obj 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2022-03-01 13:08:59.374 :GIPCXCPT:140347051505408: [ INFO] gipcDissociateF [clssnmDiscHelper : clssnm.c : 4645]: EXCEPTION[ ret gipcretFail (1) ] failed to dissociate obj 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x2003860e, flags-2 0x50, usrFlags 0x0 }, flags 0x0
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:08:59.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7fa4a8056380 [0000000000008729] { gipcEndpoint : localAddr 'gipcha://racnode2:0a5f-d359-4cee-fff4', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/5494-ef9f-5bfb-7fbe', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f2890e9380, ready 1, wobj 0x7fa4a8053150, sendp (nil) status 0flags 0x2603860e, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssnmDiscEndp: gipcDestroy 0x8729
2022-03-01 13:08:59.374 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:08:59.544 : ONMD:140347054659328: [ INFO] clssnmSendingThread: sending join msg to all nodes
2022-03-01 13:08:59.544 : ONMD:140347054659328: [ INFO] clssnmSendingThread: sent 5 join msgs to all nodes
2022-03-01 13:08:59.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:09:00.360 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6487, LATS 13547784, lastSeqNo 6486, uniqueness 1646133623, timestamp 1646140140/13547764
2022-03-01 13:09:00.360 : ONMD:140347051505408: [ INFO] clssscSelect: gipcwait returned with status gipcretPosted (17)
2022-03-01 13:09:00.360 : ONMD:140347051505408: [ INFO] clssnmconnect: connecting to addr gipcha://racnode1:nm2_racnode1-c
2022-03-01 13:09:00.366 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7fa4a8047b70 type gipchaClientReqTypePublish (1)
2022-03-01 13:09:00.367 : ONMD:140347051505408: [ INFO] clssscConnect: endp 0x8801 - cookie 0x55f28910ad30 - addr gipcha://racnode1:nm2_racnode1-c
2022-03-01 13:09:00.367 : ONMD:140347051505408: [ INFO] clssnmconnect: connecting to node(1), endp(0x8801), flags 0x10002
2022-03-01 13:09:00.367 :GIPCHTHR:140347078379264: [ INFO] gipchaWorkerProcessClientConnect: starting resolve from connect for host:racnode1, port:nm2_racnode1-c, cookie:0x7fa4a8047b70
2022-03-01 13:09:00.367 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonProcessClientReq: processing req 0x7fa4d80f79c0 type gipchaClientReqTypeResolve (4)
2022-03-01 13:09:00.367 :GIPCHDEM:140347076802304: [ INFO] gipchaDaemonCreateResolveResponse: creating resolveResponse for host:racnode1, port:nm2_racnode1-c, haname:8764-6925-ffe7-13cb, ret:0
2022-03-01 13:09:00.367 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperConnect: initiated connect for umsg 0x7fa4d80ca120 { msg 0x7fa4d80e4da0, ret gipcretRequestPending (15), flags 0x6 }, msg 0x7fa4d80e4da0 { type gipchaMsgTypeConnect (3), srcPort '360b-2c8a-112c-67e0', dstPort 'nm2_racnode1-c', srcCid 00000000-0000885a, cookie 00007fa4-d80ca120 } dataLen 0, endp 0x7fa4a8057be0 [000000000000885a] { gipchaEndpoint : port '360b-2c8a-112c-67e0', peer ':', srcCid 00000000-0000885a, dstCid 00000000-00000000, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x0 } node 0x7fa4cc0d12a0 { host 'racnode1', haName '8764-6925-ffe7-13cb', srcLuid 7b00be91-e82deec0, dstLuid bcc7bd2e-572238ad numInf 1, sentRegister 1, localMonitor 0, baseStream 0x7fa4cc0b12f0 type gipchaNodeType12001 (20), nodeIncarnation 0be8266e-006b3202, incarnation 2, cssIncarnation 0, negDigest 7, roundTripTime 4294967295 lastSeenPingAck 0 nextPingId 1 latencySrc 0 latencyDst 0 flags 0xe10680c}
2022-03-01 13:09:00.367 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperCallbackConnect: completed CONNECT:SEND umsg 0x7fa4d80ca120 { msg 0x7fa4d80e4da0, ret gipcretSuccess (0), flags 0xe }, msg 0x7fa4d80e4da0 { type gipchaMsgTypeConnect (3), srcPort '360b-2c8a-112c-67e0', dstPort 'nm2_racnode1-c', srcCid 00000000-0000885a, cookie 00007fa4-d80ca120 } dataLen 0, hendp 0x7fa4a8057be0 [000000000000885a] { gipchaEndpoint : port '360b-2c8a-112c-67e0', peer ':', srcCid 00000000-0000885a, dstCid 00000000-00000000, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x0 }
2022-03-01 13:09:00.368 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperProcessConnectAck: CONNACK completed umsg 0x7fa4d80ca120 { msg 0x7fa4d80e4da0, ret gipcretSuccess (0), flags 0xe }, msg 0x7fa4d80e4da0 { type gipchaMsgTypeConnect (3), srcPort '360b-2c8a-112c-67e0', dstPort 'nm2_racnode1-c', srcCid 00000000-0000885a, cookie 00007fa4-d80ca120 } dataLen 0, hendp 0x7fa4a8057be0 [000000000000885a] { gipchaEndpoint : port '360b-2c8a-112c-67e0', peer 'racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', srcCid 00000000-0000885a, dstCid 00000000-0002cff3, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x204 } node 0x7fa4cc0d12a0 { host 'racnode1', haName '8764-6925-ffe7-13cb', srcLuid 7b00be91-e82deec0, dstLuid bcc7bd2e-572238ad numInf 1, sentRegister 1, localMonitor 0, baseStream 0x7fa4cc0b12f0 type gipchaNodeType12001 (20), nodeIncarnation 0be8266e-006b3202, incarnation 2, cssIncarnation 0, negDigest 7, roundTripTime 4294967295 lastSeenPingAck 0 nextPingId 1 latencySrc 0 latencyDst 0 flags 0xe10680c}
2022-03-01 13:09:00.368 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteConnect: [gipc] completed connect on endp 0x7fa4a8051f00 [0000000000008801] { gipcEndpoint : localAddr 'gipcha://racnode2:360b-2c8a-112c-67e0', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', numPend 1, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 0, wobj 0x7fa4a8042bc0, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }
2022-03-01 13:09:00.368 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthInit: creating connection context ...
2022-03-01 13:09:00.368 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthInit: tls context initialized successfully
2022-03-01 13:09:00.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: TLS HANDSHAKE - SUCCESSFUL for endp 0x7fa4a8051f00 [0000000000008801] { gipcEndpoint : localAddr 'gipcha://racnode2:360b-2c8a-112c-67e0', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8042bc0, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }
2022-03-01 13:09:00.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: peerUser: NULL
2022-03-01 13:09:00.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: name:CN=2ff6536af6467f6abffec4d933ce42de_7019844,O=Oracle Clusterware,
2022-03-01 13:09:00.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: name:CN=2ff6536af6467f6abffec4d933ce42de_1646133336,O=Oracle_Clusterware,
2022-03-01 13:09:00.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthStart: endpoint 0x7fa4a8051f00 [0000000000008801] { gipcEndpoint : localAddr 'gipcha://racnode2:360b-2c8a-112c-67e0', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8042bc0, sendp (nil) status 13flags 0x200b8602, flags-2 0x10, usrFlags 0x0 }, auth state: gipcmodTlsAuthStateReady (3)
2022-03-01 13:09:00.374 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsAuthReady: TLS Auth completed Successfully
2022-03-01 13:09:00.374 : ONMD:140347051505408: [ INFO] clssscSelect: conn complete ctx 0x55f28910ad30 endp 0x8801
2022-03-01 13:09:00.374 : ONMD:140347051505408: [ INFO] clssnmInitialMsg: node 1, racnode1, endp (0x7fa500008801)
2022-03-01 13:09:00.374 : ONMD:140347051505408: [ INFO] clssnmeventhndlr: CONNCOMPLETE node(1), endp(0x8801) sending InitialMsg, conrc=2
2022-03-01 13:09:00.804 : ONMD:140347102041856: [ INFO] clsscssd_ReadyBussTimeout_CB: GMCD ready for business timedout.Exiting.
2022-03-01 13:09:00.804 : ONMD:140347102041856: [ INFO] clssscExit: Reason for exit: Init Shutdown. Now calling the respective exit function.
2022-03-01 13:09:00.804 : ONMD:140347102041856: [ INFO] (:CSSSC00011:)clsscssdcmExit: A fatal error occurred during initialization
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clssnmCheckForNetworkFailure: Entered
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clssnmCheckForNetworkFailure: skipping 0 defined 0
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clssnmCheckForNetworkFailure: expiring 1 evicted 0 evicting node 0 this node 1
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clssnmCheckForNetworkFailure: network failure
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clsscssdcmSendNlsMsgToGMCDFromQ: sending NLS msgid =1609
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clssscSendToLocalBCCTL: msgtype 2 foundpipe TRUE
2022-03-01 13:09:00.805 : ONMD:140347102041856: [ INFO] clssscSendToLocalBCCTL: Sent a msg type 2
2022-03-01 13:09:00.806 : ONMD:140347098887936: [ INFO] clssscServerBCCMHandler: send complete for type 2
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscShmSetKey: key set = css.nls.data successfully
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clsscssdcmExit: Call to clscal flush successful and clearing the CLSSSCCTX_INIT_CALOG flag so that no further CA logging happens
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] ####### Begin Diagnostic Dump #######
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] ### Begin diagnostic data for the Core layer ###
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitNODENUM (0x00000001) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitMAINCTX_DONE (0x00000002) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitPROF_PARMS (0x00000004) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitSKGXN_DONE (0x00000008) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitGMP_ENDPT (0x00000020) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitNM_MIN (0x00000040) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitNM_COMPL (0x00000100) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitGMP_MIN (0x00000200) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitBNMS_COMPL (0x00000400) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitALARM_DONE (0x00001000) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitCTRL_COMPL (0x00002000) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitFRST_RCFG (0x00004000) not set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitHAVE_DBINFO (0x00020000) not set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitGNS_READY (0x00040000) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitHAVE_ICIN (0x00200000) not set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitACTTHRD_DONE (0x00800000) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitOPENBUSS (0x01000000) not set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitBCCM_COMPL (0x02000000) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] clssscCheckInitCmpl: Initialization state clssscInitCOMPLETE (0x20000000) set
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] Initialization not complete !Error!
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] #### End diagnostic data for the Core layer ####
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] ### Begin diagnostic data for the GM Peer layer ###
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] GMP Status: State CMStateINIT, incarnation 0, holding incoming requests 0
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] Status for active hub node racnode2, number 2:
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] Connect: Started 1 completed 1 Ready 1 Fully Connected 0 !Error!
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] #### End diagnostic data for the GM Peer layer ####
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] ### Begin diagnostic data for the NM layer ###
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] Local node racnode2, number 2, state is clssnmNodeStateJOINING
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] Status for node racnode1, number 1, uniqueness 1646133623, node ID 0
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] State clssnmNodeStateINACTIVE, Connect: started 1 completed 0 OK
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] Status for node racnode2, number 2, uniqueness 1646139660, node ID 0
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] State clssnmNodeStateJOINING, Connect: started 1 completed 1 OK
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] #### End diagnostic data for the NM layer ####
2022-03-01 13:09:00.806 : ONMD:140347102041856: [ INFO] ######## End Diagnostic Dump ########
2022-03-01 13:09:00.807 : ONMD:140347102041856: [ INFO] clsscssdcmExit: Status: 4, Abort flag: 0, Core flag: 0, Don't abort: 0, flag: 112
2022-03-01 13:09:00.807 : ONMD:140347102041856: scls_dump_stack_all_threads - entry
2022-03-01 13:09:00.807 : ONMD:140347102041856: scls_dump_stack_all_threads - stat of /usr/bin/gdb failed with errno 2
2022-03-01 13:09:00.807 : ONMD:140347102041856: [ INFO] clsscssdcmExit: Now aborting
CLSB:140347102041856: [ ERROR] Oracle Clusterware infrastructure error in ONMD (OS PID 19446): Fatal signal 6 has occurred in program onmd thread 140347102041856; nested signal count is 1
Trace file /u01/app/grid/diag/crs/racnode2/crs/trace/onmd.trc
Oracle Database 21c Clusterware Release 21.0.0.0.0 - Production
Version 21.3.0.0.0 Copyright 1996, 2021 Oracle. All rights reserved.
DDE: Flood control is not active
2022-03-01T13:09:00.821785+00:00
Incident 1 created, dump file: /u01/app/grid/diag/crs/racnode2/crs/incident/incdir_1/onmd_i1.trc
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
2022-03-01 13:09:00.996 : ONMD:140347092584192: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-03-01 13:09:01.361 : ONMD:140347059402496: [ INFO] clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 541599638, wrtcnt, 6488, LATS 13548784, lastSeqNo 6487, uniqueness 1646133623, timestamp 1646140141/13548764
2022-03-01 13:09:01.374 : ONMD:140347053082368: [ INFO] clssnmRcfgMgrThread: Local Join
2022-03-01 13:09:01.374 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: begin on node(2), waittime 193000
2022-03-01 13:09:01.374 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: set curtime (13548794) for my node
2022-03-01 13:09:01.374 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: scanning 32 nodes
2022-03-01 13:09:01.374 : ONMD:140347053082368: [ INFO] clssnmLocalJoinEvent: Node racnode1, number 1, is in an existing cluster with disk state 3
2022-03-01 13:09:01.374 : ONMD:140347053082368: [ WARNING] clssnmLocalJoinEvent: takeover aborted due to cluster member node found on disk
2022-03-01 13:09:01.375 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperProcessDisconnect: processing DISCONNECT for hendp 0x7fa4a8057be0 [000000000000885a] { gipchaEndpoint : port '360b-2c8a-112c-67e0', peer 'racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', srcCid 00000000-0000885a, dstCid 00000000-0002cff3, numSend 0, maxSend 100, groupListType 1, hagroup 0x55f2890e9c70, priority 0, forceAckCount 0, usrFlags 0x4000, flags 0x4204 }
2022-03-01 13:09:01.375 :GIPCHAUP:140347078379264: [ INFO] gipchaUpperMsgComplete: completing with ret gipcretConnectionLost (12), umsg 0x7fa4d80ea0a0 { msg 0x7fa4d80e5ad0, ret gipcretRequestPending (15), flags 0x2 }, msg 0x7fa4d80e5ad0 { type gipchaMsgTypeDisconnect (5), srcCid 00000000-0000885a, dstCid 00000000-00000000 } dataLen 0
2022-03-01 13:09:01.375 :GIPCGMOD:140347078379264: [ INFO] gipcmodGipcCallbackDisconnect: [gipc] Disconnect forced for endp 0x7fa4a8051f00 [0000000000008801] { gipcEndpoint : localAddr 'gipcha://racnode2:360b-2c8a-112c-67e0', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', numPend 1, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8042bc0, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:09:01.375 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRequest: [gipc] completing req 0x7fa4d80e5bf0 [00000000000088d5] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8051f00, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:09:01.375 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcCompleteRecv: [gipc] Completed recv for req 0x7fa4d80e5bf0 [00000000000088d5] { gipcReceiveRequest : peerName '', data (nil), len 0, olen 0, off 0, parentEndp 0x7fa4a8051f00, ret gipcretConnectionLost (12), objFlags 0x0, reqFlags 0x2 }
2022-03-01 13:09:01.375 : GIPCTLS:140347051505408: [ INFO] gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7fa4a8051f00 [0000000000008801] { gipcEndpoint : localAddr 'gipcha://racnode2:360b-2c8a-112c-67e0', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', numPend 1, numReady 0, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8042bc0, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
2022-03-01 13:09:01.375 :GIPCGMOD:140347051505408: [ INFO] gipcmodGipcDisconnect: [gipc] Issued endpoint close for endp 0x7fa4a8051f00 [0000000000008801] { gipcEndpoint : localAddr 'gipcha://racnode2:360b-2c8a-112c-67e0', remoteAddr 'gipcha://racnode1:nm2_racnode1-c/815c-7b12-d9d8-945b', numPend 1, numReady 0, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x55f289109900, ready 1, wobj 0x7fa4a8042bc0, sendp (nil) status 0flags 0x20038606, flags-2 0x50, usrFlags 0x0 }
The two nodes can talk to each other, and also to the connection manager and storage containers.
@psaini79 Please have a look and let me know if we need more info. Thanks!
@ifrankrui
The configuration seems to be correct. Please share the following:
From the Docker host:
systemctl status firewalld
getenforce
From both the containers:
cat /etc/hosts
nslookup racnode1
nslookup racnode2
nslookup <vips>
nslookup <scan>
ping <vips>
ping <scan>
cat /etc/resolv.conf
/bin/netstat -in # <<Check the MTU size on eth0>>
# racnode1
ping -s <MTU> -c 2 -I 192.168.17.150 192.168.17.151
# racnode2
ping -s <MTU> -c 2 -I 192.168.17.151 192.168.17.150
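One caveat with the ping test above: `-s` sets the ICMP payload size, not the full frame size, so passing the raw MTU value will force fragmentation. A minimal sketch of deriving the largest unfragmented payload (assuming the 1500-byte eth0 MTU reported by `netstat -in` in this thread, and the private-interconnect addresses shown above):

```shell
# Derive the largest ICMP payload that fits in one frame:
# MTU minus 20-byte IPv4 header minus 8-byte ICMP header.
MTU=1500
PAYLOAD=$((MTU - 28))
echo "$PAYLOAD"   # 1472 for a 1500-byte MTU

# Then, on racnode1, ping racnode2-priv with fragmentation disallowed
# (-M do) so any MTU mismatch on the path shows up as an error:
#   ping -M do -s "$PAYLOAD" -c 2 -I 192.168.17.150 192.168.17.151
# And the reverse direction from racnode2:
#   ping -M do -s "$PAYLOAD" -c 2 -I 192.168.17.151 192.168.17.150
```

If the full-size probe fails while a plain `ping` succeeds, that points at an MTU mismatch on the interconnect path rather than a routing or firewall problem.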
Also, can you please share the VM details? Have you deployed in a cloud? If yes, which one?
Thanks @psaini79
I stopped the Linux firewall before running the containers. Please see the following.
From the docker host:
[root@vm-oracle ansible]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@vm-oracle ansible]# getenforce
Permissive
From racnode1:
[grid@racnode1 ~]$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
172.16.1.150 racnode1.example.com racnode1
192.168.17.150 racnode1-priv.example.com racnode1-priv
172.16.1.160 racnode1-vip.example.com racnode1-vip
172.16.1.15 racnode-cman1.example.com racnode-cman1
172.16.1.151 racnode2.example.com racnode2
192.168.17.151 racnode2-priv.example.com racnode2-priv
172.16.1.161 racnode2-vip.example.com racnode2-vip
172.16.1.70 racnode-scan.example.com racnode-scan
[grid@racnode1 ~]$ nslookup racnode1
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode1.example.com
Address: 172.16.1.150
[grid@racnode1 ~]$ nslookup racnode2
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode2.example.com
Address: 172.16.1.151
[grid@racnode1 ~]$ nslookup 172.16.1.160
160.1.16.172.in-addr.arpa name = racnode1-vip.example.com.
[grid@racnode1 ~]$ nslookup racnode-cman1
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode-cman1.example.com
Address: 172.16.1.2
[grid@racnode1 ~]$ nslookup 172.16.1.161
161.1.16.172.in-addr.arpa name = racnode2-vip.example.com.
[grid@racnode1 ~]$ nslookup 172.16.1.70
70.1.16.172.in-addr.arpa name = racnode1-scan.example.com.
[grid@racnode1 ~]$ nslookup racnode-scan.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode-scan.example.com
Address: 172.16.1.172
Name: racnode-scan.example.com
Address: 172.16.1.170
Name: racnode-scan.example.com
Address: 172.16.1.171
[grid@racnode1 ~]$ nslookup racnode2-priv.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
** server can't find racnode2-priv.example.com: NXDOMAIN
[grid@racnode1 ~]$ nslookup racnode1-priv.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
** server can't find racnode1-priv.example.com: NXDOMAIN
[grid@racnode1 ~]$ ping racnode-scan.example.com
PING racnode-scan.example.com (172.16.1.70) 56(84) bytes of data.
From racnode1.example.com (172.16.1.150) icmp_seq=1 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=2 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=3 Destination Host Unreachable
^C
--- racnode-scan.example.com ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4134ms
pipe 4
[grid@racnode1 ~]$ ping 172.16.1.70
PING 172.16.1.70 (172.16.1.70) 56(84) bytes of data.
From 172.16.1.150 icmp_seq=1 Destination Host Unreachable
From 172.16.1.150 icmp_seq=2 Destination Host Unreachable
From 172.16.1.150 icmp_seq=3 Destination Host Unreachable
^C
--- 172.16.1.70 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4104ms
pipe 4
[grid@racnode1 ~]$ ping racnode2-vip.example.com
PING racnode2-vip.example.com (172.16.1.161) 56(84) bytes of data.
From racnode1.example.com (172.16.1.150) icmp_seq=1 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=2 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=3 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=4 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=5 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=6 Destination Host Unreachable
^C
--- racnode2-vip.example.com ping statistics ---
8 packets transmitted, 0 received, +6 errors, 100% packet loss, time 7182ms
pipe 4
[grid@racnode1 ~]$ ping racnode2-priv.example.com
PING racnode2-priv.example.com (192.168.17.151) 56(84) bytes of data.
64 bytes from racnode2-priv.example.com (192.168.17.151): icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from racnode2-priv.example.com (192.168.17.151): icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from racnode2-priv.example.com (192.168.17.151): icmp_seq=3 ttl=64 time=0.056 ms
64 bytes from racnode2-priv.example.com (192.168.17.151): icmp_seq=4 ttl=64 time=0.051 ms
^C
--- racnode2-priv.example.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3068ms
rtt min/avg/max/mdev = 0.051/0.054/0.056/0.007 ms
[grid@racnode1 ~]$ cat /etc/resolv.conf
search example.com
nameserver 172.16.1.25
[grid@racnode1 ~]$ /bin/netstat -in
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 5389 0 0 0 5885 0 0 0 BMRU
eth0:1 1500 - no statistics available - BMRU
eth1 1500 16427 0 0 0 18801 0 0 0 BMRU
eth1:1 1500 - no statistics available - BMRU
eth1:2 1500 - no statistics available - BMRU
eth1:3 1500 - no statistics available - BMRU
eth1:4 1500 - no statistics available - BMRU
lo 65536 195060 0 0 0 195060 0 0 0 LRU
[grid@racnode1 ~]$ ping -s 1500 -c 2 -I 192.168.17.150 192.168.17.151
PING 192.168.17.151 (192.168.17.151) from 192.168.17.150 : 1500(1528) bytes of data.
1508 bytes from 192.168.17.151: icmp_seq=1 ttl=64 time=0.093 ms
1508 bytes from 192.168.17.151: icmp_seq=2 ttl=64 time=0.077 ms
--- 192.168.17.151 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1007ms
rtt min/avg/max/mdev = 0.077/0.085/0.093/0.008 ms
From node2:
[grid@racnode2 ~]$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
172.16.1.150 racnode1.example.com racnode1
192.168.17.150 racnode1-priv.example.com racnode1-priv
172.16.1.160 racnode1-vip.example.com racnode1-vip
172.16.1.15 racnode-cman1.example.com racnode-cman1
172.16.1.151 racnode2.example.com racnode2
192.168.17.151 racnode2-priv.example.com racnode2-priv
172.16.1.161 racnode2-vip.example.com racnode2-vip
172.16.1.70 racnode-scan.example.com racnode-scan
[grid@racnode2 ~]$ nslookup racnode1
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode1.example.com
Address: 172.16.1.150
[grid@racnode2 ~]$ nslookup racnode2
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode2.example.com
Address: 172.16.1.151
[grid@racnode2 ~]$ nslookup 172.16.1.150
150.1.16.172.in-addr.arpa name = racnode1.example.com.
[grid@racnode2 ~]$ nslookup racnode1.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode1.example.com
Address: 172.16.1.150
[grid@racnode2 ~]$ nslookup racnode1-priv.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
** server can't find racnode1-priv.example.com: NXDOMAIN
[grid@racnode2 ~]$ nslookup racnode1-vip.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode1-vip.example.com
Address: 172.16.1.160
[grid@racnode2 ~]$ nslookup racnode2-vip.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode2-vip.example.com
Address: 172.16.1.161
[grid@racnode2 ~]$ nslookup racnode-scan.example.com
Server: 172.16.1.25
Address: 172.16.1.25#53
Name: racnode-scan.example.com
Address: 172.16.1.171
Name: racnode-scan.example.com
Address: 172.16.1.172
Name: racnode-scan.example.com
Address: 172.16.1.170
[grid@racnode2 ~]$ ping racnode1-vip.example.com
PING racnode1-vip.example.com (172.16.1.160) 56(84) bytes of data.
64 bytes from racnode1-vip.example.com (172.16.1.160): icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from racnode1-vip.example.com (172.16.1.160): icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from racnode1-vip.example.com (172.16.1.160): icmp_seq=3 ttl=64 time=0.053 ms
^C
--- racnode1-vip.example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2049ms
rtt min/avg/max/mdev = 0.053/0.059/0.069/0.011 ms
[grid@racnode2 ~]$ ping racnode-scan.example.com
PING racnode-scan.example.com (172.16.1.70) 56(84) bytes of data.
From racnode2.example.com (172.16.1.151) icmp_seq=1 Destination Host Unreachable
From racnode2.example.com (172.16.1.151) icmp_seq=2 Destination Host Unreachable
From racnode2.example.com (172.16.1.151) icmp_seq=3 Destination Host Unreachable
From racnode2.example.com (172.16.1.151) icmp_seq=4 Destination Host Unreachable
From racnode2.example.com (172.16.1.151) icmp_seq=5 Destination Host Unreachable
From racnode2.example.com (172.16.1.151) icmp_seq=6 Destination Host Unreachable
^C
--- racnode-scan.example.com ping statistics ---
9 packets transmitted, 0 received, +6 errors, 100% packet loss, time 8199ms
pipe 4
[grid@racnode2 ~]$ ping 172.16.1.70
PING 172.16.1.70 (172.16.1.70) 56(84) bytes of data.
From 172.16.1.151 icmp_seq=1 Destination Host Unreachable
From 172.16.1.151 icmp_seq=2 Destination Host Unreachable
From 172.16.1.151 icmp_seq=3 Destination Host Unreachable
^C
--- 172.16.1.70 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4099ms
pipe 4
[grid@racnode2 ~]$ cat /etc/resolv.conf
search example.com
nameserver 172.16.1.25
[grid@racnode2 ~]$ /bin/netstat -in
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 5951 0 0 0 5902 0 0 0 BMRU
eth1 1500 18675 0 0 0 16339 0 0 0 BMRU
lo 65536 8767 0 0 0 8767 0 0 0 LRU
[grid@racnode2 ~]$ ping -s 1500 -c 2 -I 192.168.17.151 192.168.17.150
PING 192.168.17.150 (192.168.17.150) from 192.168.17.151 : 1500(1528) bytes of data.
1508 bytes from 192.168.17.150: icmp_seq=1 ttl=64 time=0.090 ms
1508 bytes from 192.168.17.150: icmp_seq=2 ttl=64 time=0.075 ms
--- 192.168.17.150 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1039ms
rtt min/avg/max/mdev = 0.075/0.082/0.090/0.011 ms
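The forward lookups above were run by hand; for completeness, they can be scripted. A minimal sketch (assuming getent is available, as on Oracle Linux — getent consults /etc/hosts before DNS, the same order a real client uses, which is useful for spotting hosts-file entries shadowing DNS records):

```shell
#!/bin/sh
# Sketch: forward-resolve each cluster name via the system resolver and
# print the first address returned, or flag names with no record.
check() {
  name=$1
  addr=$(getent hosts "$name" | awk '{print $1; exit}')
  if [ -z "$addr" ]; then
    echo "$name: NO FORWARD RECORD"
  else
    echo "$name -> $addr"
  fi
}
for n in racnode1 racnode2 racnode1-vip racnode2-vip racnode-scan; do
  check "$n"
done
```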
Moreover, the Docker host is an Azure VM: Standard D16 v4 (16 vCPUs, 64 GiB memory), with a 256 GB OS disk and a 128 GB data disk.
@ifrankrui
For further assistance, I recommend running Oracle RAC on Docker on-premises on KVM or VirtualBox, populated with OEL 7.x and UEK5, because Oracle RAC is only supported in the Oracle Cloud: https://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-support-2843861.pdf
For details, please refer to the following GitHub thread:
https://github.com/oracle/docker-images/issues/1590
If you still have any questions, please let me know and I will try to get more details.
Hi @psaini79
I came back to work on this issue. I have also followed the investigation on https://github.com/oracle/docker-images/issues/1590.
If I start node2 before node1, node2 comes up and runs, but node1 is then unable to join, so only one node can be active in my cluster at a time.
The cluster reports the following error in /u01/app/grid/diag/crs/racnode1/crs/trace/ocssd.trc:
2022-03-17 13:49:44.293 : CSSD:3052607232: [ INFO] clssnmeventhndlr: gipcAssociate endp 0x179c1 in container 0x75b type of conn gipcha
2022-03-17 13:49:44.296 : GIPCTLS:3052607232: gipcmodTlsAuthStart: TLS HANDSHAKE - SUCCESSFUL
2022-03-17 13:49:44.296 : GIPCTLS:3052607232: gipcmodTlsAuthStart: Peer is anonymous
2022-03-17 13:49:44.296 : GIPCTLS:3052607232: gipcmodTlsAuthStart: endpoint 0x7fa68c06ebc0 [00000000000179c1] { gipcEndpoint : localAddr 'gipcha://racnode2:nm2_racnode1-c/79a6-41c8-3402-4cfb', remoteAddr 'gipcha://racnode1:f103-d105-7a95-cfe2', numPend 2, numReady 0, numDone 1, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x561d604e7e90, ready 1, wobj 0x7fa68c0b7ec0, sendp (nil) status 0flags 0x20138606, flags-2 0x10, usrFlags 0x0 }, auth state: gipcmodTlsAuthStateReady (3)
2022-03-17 13:49:44.296 : GIPCTLS:3052607232: gipcmodTlsAuthReady: TLS Auth completed Successfully
2022-03-17 13:49:44.297 : CSSD:3052607232: [ ERROR] clssnmConnComplete: Rejecting connection from node 1 as MultiNode RAC is not supported in this Configuration
2022-03-17 13:49:44.297 : GIPCTLS:3052607232: gipcmodTlsDisconnect: [tls] disconnect issued on endp 0x7fa68c06ebc0 [00000000000179c1] { gipcEndpoint : localAddr 'gipcha://racnode2:nm2_racnode1-c/79a6-41c8-3402-4cfb', remoteAddr 'gipcha://racnode1:f103-d105-7a95-cfe2', numPend 1, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 22360, readyRef 0x561d604c7b40, ready 0, wobj 0x7fa68c0b7ec0, sendp (nil) status 0flags 0x26138606, flags-2 0x50, usrFlags 0x0 }
2022-03-17 13:49:44.297 :GIPCGMOD:3052607232: gipcmodGipcDisconnect: [gipc] Issued endpoint close for endp 0x7fa68c06ebc0 [00000000000179c1] { gipcEndpoint : localAddr 'gipcha://racnode2:nm2_racnode1-c/79a6-41c8-3402-4cfb', remoteAddr 'gipcha://racnode1:f103-d105-7a95-cfe2', numPend 1, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 22360, readyRef 0x561d604c7b40, ready 0, wobj 0x7fa68c0b7ec0, sendp (nil) status 0flags 0x26138606, flags-2 0x50, usrFlags 0x0 }
I saw the same error when joining node2 to the cluster. Do you have any idea what causes it?
** The same error is discussed in https://balazspapp.wordpress.com/2018/12/09/you-may-not-run-multinode-rac-because-it-is-not-supported-or-certified/. I tried the workaround of blocking 169.254.169.254 on all the nodes, but it doesn't work.
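For reference, the blocking workaround described in that post amounts to dropping traffic to and from the link-local cloud metadata address on every node. A sketch of the rules (shown with echo so they can be reviewed first; drop the echo and run as root to actually apply them — exact chain placement may vary per environment):

```shell
#!/bin/sh
# Workaround sketch: block the 169.254.169.254 metadata address on each node.
# The rules are only printed here for review, not applied.
for rule in \
  "iptables -A OUTPUT -d 169.254.169.254 -j DROP" \
  "iptables -A INPUT -s 169.254.169.254 -j DROP"
do
  echo "$rule"   # remove 'echo' to apply the rule (requires root)
done
```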
@ifrankrui
Could you please share details about the environment? Are you trying this on-premises or in the cloud?
Hi, I am following the steps in the repo to build a cluster with the Docker images. I have the first node up and running and can connect to the database, but when I add the second node to the cluster I run into the error below. Here is the output and other logs:
Checking crsctl:
Here is some more information from the trace:
It looks like racnode2 can't communicate with racnode1. Do I need to specify the connection manager when creating the racnode2 container?
Many thanks!