oracle/docker-images

Official source of container configurations, images, and examples for Oracle products and projects
https://developer.oracle.com/use-cases/#containers
Universal Permissive License v1.0

Unable to create DB on 12.2.0.1 RAC container #1339

Closed babloo2642 closed 5 years ago

babloo2642 commented 5 years ago

Verifying Time zone consistency ...PASSED

Verifying VIP Subnet configuration check ...PASSED

Verifying resolv.conf Integrity ...

Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)

Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)

Verifying DNS/NIS name service ...

Verifying Name Service Switch Configuration File Integrity ...PASSED

Verifying DNS/NIS name service ...FAILED (PRVG-1101)

Verifying Single Client Access Name (SCAN) ...PASSED

Verifying Domain Sockets ...PASSED

Verifying /boot mount ...PASSED

Verifying Daemon "avahi-daemon" not configured and running ...PASSED

Verifying Daemon "proxyt" not configured and running ...PASSED

Verifying loopback network interface address ...PASSED

Verifying Oracle base: /export/app/grid ...

Verifying '/export/app/grid' ...PASSED

Verifying Oracle base: /export/app/grid ...PASSED

Verifying User Equivalence ...PASSED

Verifying Network interface bonding status of private interconnect network interfaces ...PASSED

Verifying File system mount options for path /var ...PASSED

Verifying zeroconf check ...PASSED

Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING

Verifying ASM device sharedness check ...WARNING

Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING

PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED

Verifying resolv.conf Integrity ...FAILED

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1

racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1

racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED

PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst

Date: Jul 19, 2019 2:17:31 AM

CVU home: /export/app/12.2.0/grid/

User: grid

07-19-2019 02:18:39 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.

07-19-2019 02:18:39 UTC : : Running Grid Installation

07-19-2019 02:18:56 UTC : : Running root.sh

07-19-2019 02:18:56 UTC : : Nodes in the cluster racnode1

07-19-2019 02:18:56 UTC : : Running root.sh on racnode1

07-19-2019 02:18:57 UTC : : Running post root.sh steps

07-19-2019 02:18:57 UTC : : Running post root.sh steps to setup Grid env

07-19-2019 02:19:03 UTC : : Checking Cluster Status

07-19-2019 02:19:03 UTC : : Nodes in the cluster

07-19-2019 02:19:03 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed

07-19-2019 02:19:03 UTC : : Generating DB Responsefile

07-19-2019 02:19:03 UTC : : Running DB creation

07-19-2019 02:19:14 UTC : : Checking DB status

07-19-2019 02:19:15 UTC : : ORCLCDB is not up and running on racnode1

07-19-2019 02:19:15 UTC : : Error has occurred in Grid Setup, Please verify!

babloo2642 commented 5 years ago

Starting Update UTMP about System Runlevel Changes...
07-29-2019 11:20:12 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
07-29-2019 11:20:12 UTC : : Resetting Failed Services

babloo2642 commented 5 years ago

[grid@racnode1 tmp]$ cat db_status.txt
ERROR:
ORA-12162: TNS:net service name is incorrectly specified
SP2-0306: Invalid option.
Usage: CONN[ECT] [{<logon>|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM|SYSBACKUP|SYSDG|SYSKM|SYSRAC}] [edition=value]]
where <logon> ::= <username>[/<password>][@<connect_identifier>]
      <proxy> ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]
SP2-0306: Invalid option.
Usage: CONN[ECT] [{<logon>|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM|SYSBACKUP|SYSDG|SYSKM|SYSRAC}] [edition=value]]
where <logon> ::= <username>[/<password>][@<connect_identifier>]
      <proxy> ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

psaini79 commented 5 years ago

Ok. I am looking at it and will come back to you asap.

babloo2642 commented 5 years ago

Hi, Please let me know if you want me to share more details.

psaini79 commented 5 years ago

Hi,

Thanks for uploading the logs; it seems the Grid installation failed. Please share the following:

Execute the following on the Docker host:

uname -r
cat /etc/oracle-release
docker info
systemctl status docker
docker images 
docker ps -a

Execute the following on the container racnode1, and upload the Grid logs:

docker exec -i -t racnode1 /bin/bash
cd /u01/app/grid/diag/crs/racnode1/crs
tar -cvzf trace.tgz trace
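If it is easier, the same archive can be pulled straight from the Docker host in one pass (a sketch; it assumes the container is named racnode1 and the trace path above exists):

# Run on the Docker host: create the archive inside the container, then copy it out.
docker exec racnode1 /bin/bash -c 'cd /u01/app/grid/diag/crs/racnode1/crs && tar -cvzf /tmp/trace.tgz trace'
docker cp racnode1:/tmp/trace.tgz .
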
babloo2642 commented 5 years ago

I don't see any logs in the trace directory, just an empty directory "trace".

babloo2642 commented 5 years ago

uname -r

4.1.12-124.25.1.el7uek.x86_64

cat /etc/oracle-release

Oracle Linux Server release 7.6

docker info

Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 25
Server Version: 18.09.7
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.1.12-124.25.1.el7uek.x86_64
Operating System: Oracle Linux Server 7.6
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.17GiB
Name: docker.example.com
ID: BRCU:YEO2:V4MX:AQNK:HIQ5:JWNW:UNZR:4FDH:ZS66:Z74Y:IBZV:BPDU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

systemctl status docker

● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-07-17 18:47:15 PDT; 2 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 2762 (dockerd)
   CGroup: /system.slice/docker.service
           └─2762 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Aug 03 18:30:20 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 18:40:21 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 18:50:20 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 19:00:26 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 19:10:20 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 19:20:20 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 19:30:24 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 19:40:21 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 19:50:20 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Aug 03 20:00:21 docker.example.com systemd[1]: [/usr/lib/systemd/system/docker.serv...'
Hint: Some lines were ellipsized, use -l to show in full.

docker images

REPOSITORY            TAG        IMAGE ID       CREATED       SIZE
oracle/database-rac   12.2.0.1   25be92377bca   6 days ago    24.7GB
oracle/client-cman    12.2.0.1   0cf079c49ea9   2 weeks ago   4.58GB
oraclelinux           7-slim     d94f4e9e5c13   5 weeks ago   118MB

docker ps -a

CONTAINER ID   IMAGE                          COMMAND                  CREATED      STATUS      PORTS   NAMES
3f91828a80f9   oracle/database-rac:12.2.0.1   "/usr/sbin/oracleinit"   5 days ago   Up 5 days           racnode1
3417d5d5c975   oracle/client-cman:12.2.0.1    "/bin/sh -c 'exec $S…"   5 days ago   Up 5 days           racnode-cman

babloo2642 commented 5 years ago

Please let me know if you want me to share more details. Thank you.

babloo2642 commented 5 years ago

Hi, Do you have any update on this?

psaini79 commented 5 years ago

@babloo2642 I looked at the configuration. How did you install the Docker engine? Did you use the Oracle yum repository for the installation? In general, the Server Version field in the output of docker info should show a version in the following format:

Server Version: <version>-ol

Please look at the following white paper for detailed steps: https://www.oracle.com/technetwork/database/options/clustering/rac-ondocker-bp-wp-5458685.pdf

Please execute the following steps and upload the files:

docker exec -i -t racnode1 /bin/bash
ls -ltr /u01/app/grid/crsdata/racnode1/crsconfig/rootcrs*

You should see the rootcrs* files inside /u01/app/grid/crsdata/racnode1/crsconfig.

If you see the files are there, then zip the cfgtoollogs directory under the Grid home and upload it:

cd /u01/app/19.3.0/grid

Zip the cfgtoollogs dir and upload it.
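For reference, that collection can also be done from the Docker host in one shot (a sketch; in this thread the Grid home is actually /export/app/12.2.0/grid, so the path below is adjusted accordingly, and tar is used in case zip is not installed in the container):

# Run on the Docker host: archive cfgtoollogs inside the container, then copy it out.
docker exec racnode1 /bin/bash -c 'cd /export/app/12.2.0/grid && tar -czf /tmp/cfgtoollogs.tgz cfgtoollogs'
docker cp racnode1:/tmp/cfgtoollogs.tgz .
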

Also, do you have Oracle MOS (support portal) access? If yes, can you file an SR and let me know the details?

babloo2642 commented 5 years ago

Hi, yes, I used the Oracle yum repository to install the Docker engine. I don't see any rootcrs* file in the path you mentioned.

[grid@racnode1 grid]$ cd /export/app/grid/crsdata/racnode1/
[grid@racnode1 racnode1]$ ls
cvu
[grid@racnode1 racnode1]$ cd cvu
[grid@racnode1 cvu]$ ls
cvutrace.log.0

Please find the cfgtoollogs below: cfgtoolslogs.tar.gz.zip

I have also filed an SR.

Thank you.

psaini79 commented 5 years ago

@babloo2642 Please send me the SR number and let me contact the SR owner. I might have a Zoom session with you. Also, once the issue is resolved we can paste the solution here.

babloo2642 commented 5 years ago

@psaini79

Sounds good. Please find the SR: "SR 3-20693662131". I'm comfortable joining a Zoom session from 7:00 - 10:00 AM IST any day. Thank you.

babloo2642 commented 5 years ago

@psaini79

Do you have any update on this? Please let me know if you have any. Thank you.

babloo2642 commented 5 years ago

@psaini79

Did you get a chance to contact SR: "SR 3-20693662131" owner?

psaini79 commented 5 years ago

@babloo2642

Please execute the following steps and paste the output:

docker exec -i -t racnode1 /bin/bash
sudo /bin/bash
sh -x /opt/scripts/startup/runOracle.sh

Paste the screen output and attach the file /tmp/orod.log.
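To keep a copy of the trace while watching it, the run can be wrapped with tee (an optional sketch; the log name /tmp/runOracle_trace.log is just a name chosen here, and sudo is assumed to be configured for the container user as in this image):

# Run from the Docker host; captures the xtrace output to a file inside the container.
docker exec -it racnode1 sudo /bin/bash -c 'sh -x /opt/scripts/startup/runOracle.sh 2>&1 | tee /tmp/runOracle_trace.log'
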

babloo2642 commented 5 years ago

@psaini79

Please find the details below.

[grid@racnode1 ~]$ sudo /bin/bash
bash-4.2# sh -x /opt/scripts/startup/runOracle.sh

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.027/0.034/0.038/0.008 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 1
The script will setup SSH connectivity from the host racnode1 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode1 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes User chose to skip passphrase related questions. Creating .ssh directory on local host, if not present already Creating authorized_keys file on local host Changing permissions on authorized_keys to 644 on local host Creating known_hosts file on local host Changing permissions on known_hosts to 644 on local host Creating config file on local host If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup. Creating .ssh directory and setting permissions on remote host racnode1 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT. The script would create ~grid/.ssh/config file on remote host racnode1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host racnode1. Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts. grid@racnode1's password: Done with creating .ssh directory and setting permissions on remote host racnode1. Copying local host public key to the remote host racnode1 The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1. grid@racnode1's password: Done copying local host public key to the remote host racnode1 Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user grid.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:-- Running /usr/bin/ssh -x -l grid racnode1 date to verify SSH connectivity has been setup from local host to racnode1. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Wed Aug 14 19:18:18 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Wed Aug 14 19:18:18 UTC 2019

-Verification from racnode1 complete-
SSH verification complete.
spawn /export/app/oracle/product/12.2.0/dbhome_1/oui/prov/resources/scripts/sshUserSetup.sh -user oracle -hosts racnode1 -logfile /tmp/oracle_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm
The output of this script is also logged into /tmp/oracle_SetupSSH.log
Hosts are racnode1
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.038 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.038 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.037 ms

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.026/0.035/0.040/0.009 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 1
The script will setup SSH connectivity from the host racnode1 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode1 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes User chose to skip passphrase related questions. Creating .ssh directory on local host, if not present already Creating authorized_keys file on local host Changing permissions on authorized_keys to 644 on local host Creating known_hosts file on local host Changing permissions on known_hosts to 644 on local host Creating config file on local host If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup. Creating .ssh directory and setting permissions on remote host racnode1 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT. The script would create ~oracle/.ssh/config file on remote host racnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host racnode1. Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts. oracle@racnode1's password: Done with creating .ssh directory and setting permissions on remote host racnode1. Copying local host public key to the remote host racnode1 The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1. oracle@racnode1's password: Done copying local host public key to the remote host racnode1 Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:-- Running /usr/bin/ssh -x -l oracle racnode1 date to verify SSH connectivity has been setup from local host to racnode1. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Wed Aug 14 19:18:53 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Wed Aug 14 19:18:53 UTC 2019

-Verification from racnode1 complete-
SSH verification complete.
su - $GRID_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $GRID_USER@$node echo ok 2>&1"
su - $ORACLE_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $ORACLE_USER@$node echo ok 2>&1"
-bash: /etc/rac_env_vars: Permission denied
-bash: /etc/rac_env_vars: Permission denied

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED
Verifying Port Availability for component "Oracle Database Listener" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615)
Verifying ASM device sharedness check ...WARNING (PRVG-1615)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...WARNING (PRVG-1615)
Verifying I/O scheduler ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying VIP Subnet configuration check ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying DNS/NIS name service ...
Verifying Name Service Switch Configuration File Integrity ...PASSED
Verifying DNS/NIS name service ...FAILED (PRVG-1101)
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Oracle base: /export/app/grid ...
Verifying '/export/app/grid' ...PASSED
Verifying Oracle base: /export/app/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying Network interface bonding status of private interconnect network interfaces ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED

Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst
Date: Aug 14, 2019 7:19:02 PM
CVU home: /export/app/12.2.0/grid/
User: grid

Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30004] The SYS password entered is invalid.
CAUSE: Passwords may contain only alphanumeric characters from the chosen database character set, underscore (_), dollar sign ($), or pound sign (#).
ACTION: Provide a password as per recommendations.
[FATAL] [INS-30004] The ASMSNMP password entered is invalid.
CAUSE: Passwords may contain only alphanumeric characters from the chosen database character set, underscore (_), dollar sign ($), or pound sign (#).
ACTION: Provide a password as per recommendations.
Check /export/app/12.2.0/grid/install/root_racnode1_2019-08-14_19-20-24-457322990.log for the output of root script

Launching Oracle Grid Infrastructure Setup Wizard...

The configuration that needs to be performed as privileged user is not completed. The configuration tools can only be executed after that. You can find the logs of this session at: /export/app/oraInventory/logs/GridSetupActions2019-08-14_07-20-24PM

As a root user, execute the following script(s):

  1. /export/app/12.2.0/grid/root.sh

Execute /export/app/12.2.0/grid/root.sh on the following nodes: [racnode1]

After the successful root script execution, proceed to re-run the same 'gridSetup.sh -executeConfigTools' command.

Successfully Configured Software.
[FATAL] [DBT-10602] (Oracle Real Application Cluster (RAC) database) database cannot be created in this system.
CAUSE: Oracle Grid Infrastructure is not configured on the system.
ACTION: Configure Oracle Grid Infrastructure prior to creation of (Oracle Real Application Cluster (RAC) database). Refer to Oracle Grid Infrastructure Installation Guide for installation and configuration steps.
Checking on racnode1

Please find the file /tmp/orod.log orod.log.zip

psaini79 commented 5 years ago

@babloo2642 The Grid software configuration failed. Please zip the directory /export/app/oraInventory/logs/GridSetupActions2019-08-14_07-20-24PM and upload it.

Please paste the output of the following commands:

docker exec -i -t racnode1 sudo /bin/bash
cat /etc/hosts
ping racnode1
ping racnode1-vip
ping racnode-scan
ifconfig -a
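A couple of optional follow-ups inside the container (a sketch; it assumes nslookup from bind-utils is available in the image) to see what the embedded Docker DNS at 127.0.0.11 actually returns for the names CVU complained about:

# Inside racnode1: check the resolver config and the cluster names.
cat /etc/resolv.conf
nslookup racnode1
nslookup racnode-scan
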

psaini79 commented 5 years ago

What password did you use for testing? If you cannot paste it here for security reasons, please make sure it follows the standard in the Oracle guidelines.

babloo2642 commented 5 years ago

@psaini79

Please find the details below.

[grid@racnode1 ~]$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost

172.16.1.150 racnode1.example.com racnode1

192.168.17.150 racnode1-priv.example.com racnode1-priv

172.16.1.160 racnode1-vip.example.com racnode1-vip

172.16.1.70 racnode-scan.example.com racnode-scan

172.16.1.15 racnode-cman1.example.com racnode-cman1

[grid@racnode1 ~]$ ping racnode1
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.036 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.036 ms
^C
--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.031/0.038/0.051/0.008 ms

[grid@racnode1 ~]$ ping racnode1-vip
PING racnode1-vip.example.com (172.16.1.160) 56(84) bytes of data.
From racnode1.example.com (172.16.1.150) icmp_seq=1 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=2 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=3 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=4 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=5 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=6 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=7 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=8 Destination Host Unreachable
^C
--- racnode1-vip.example.com ping statistics ---
8 packets transmitted, 0 received, +8 errors, 100% packet loss, time 7001ms
pipe 4

[grid@racnode1 ~]$ ping racnode-scan
PING racnode-scan.example.com (172.16.1.70) 56(84) bytes of data.
From racnode1.example.com (172.16.1.150) icmp_seq=1 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=2 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=3 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=4 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=5 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=6 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=7 Destination Host Unreachable
From racnode1.example.com (172.16.1.150) icmp_seq=8 Destination Host Unreachable
^C
--- racnode-scan.example.com ping statistics ---
9 packets transmitted, 0 received, +8 errors, 100% packet loss, time 8002ms
pipe 4

[grid@racnode1 ~]$ ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.17.150 netmask 255.255.255.0 broadcast 192.168.17.255
        ether 02:42:c0:a8:11:96 txqueuelen 0 (Ethernet)
        RX packets 294195 bytes 25572884 (24.3 MiB)
        RX errors 0 dropped 2712 overruns 0 frame 0
        TX packets 301 bytes 38338 (37.4 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.16.1.150 netmask 255.255.255.0 broadcast 172.16.1.255
        ether 02:42:ac:10:01:96 txqueuelen 0 (Ethernet)
        RX packets 379482 bytes 30682652 (29.2 MiB)
        RX errors 0 dropped 3617 overruns 0 frame 0
        TX packets 39 bytes 1638 (1.5 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        loop txqueuelen 0 (Local Loopback)
        RX packets 2758 bytes 480116 (468.8 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 2758 bytes 480116 (468.8 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Please find the zip file below: GridSetupActions2019-08-14_07-20-24PM.tgz.zip

babloo2642 commented 5 years ago

@psaini79

I have followed the Oracle guidelines under "Password Management" in the README of OracleRealApplicationClusters and edited /opt/.secrets/common_os_pwdfile to seed the password for grid/oracle and the database. Do you think this is good?

psaini79 commented 5 years ago

@babloo2642

Can you check if /tmp/grid.rsp is still there on racnode1? If yes, please share the file. Otherwise, I will give you steps to proceed manually.

psaini79 commented 5 years ago

@babloo2642

Also, please upload the following file along with the response file. If the response file is not there, you can still upload this file: /u01/app/19.3.0/grid/crs/install/crsconfig_params

psaini79 commented 5 years ago

@babloo2642

You need to replace 19.3.0 with 12.2.0.1.

babloo2642 commented 5 years ago

@psaini79

Yes, /tmp/grid.rsp is still there on racnode1. Please find the zip files below:

grid.rsp.tgz.zip
crsconfig_params.tgz.zip

psaini79 commented 5 years ago

@babloo2642

Please do the following as the root user inside the racnode1 container:

1. rm -f /tmp/grid.rsp
2. Edit /opt/scripts/startup/grid.rsp and change oracle.install.crs.config.gpnp.gnsVIPAddress=###GNSVIP_ADDRESS### to oracle.install.crs.config.gpnp.gnsVIPAddress=
3. Execute the following command as the root user and provide the screen output: sh -x /opt/scripts/startup/runOracle.sh
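For step 2, an equivalent non-interactive edit would be (a sketch; it assumes the placeholder appears exactly as written above):

# Run as root inside racnode1: blank out the GNS VIP placeholder in the response file.
sed -i 's|^oracle.install.crs.config.gpnp.gnsVIPAddress=.*|oracle.install.crs.config.gpnp.gnsVIPAddress=|' /opt/scripts/startup/grid.rsp
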

If it fails, I will request new logs based on the output.

babloo2642 commented 5 years ago

@psaini79

It failed. Please find the output below.

bash-4.2# sh -x /opt/scripts/startup/runOracle.sh

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.028/0.036/0.043/0.006 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 1
The script will setup SSH connectivity from the host racnode1 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode1 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes User chose to skip passphrase related questions. Creating .ssh directory on local host, if not present already Creating authorized_keys file on local host Changing permissions on authorized_keys to 644 on local host Creating known_hosts file on local host Changing permissions on known_hosts to 644 on local host Creating config file on local host If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup. Creating .ssh directory and setting permissions on remote host racnode1 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT. The script would create ~grid/.ssh/config file on remote host racnode1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host racnode1. Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts. grid@racnode1's password: Done with creating .ssh directory and setting permissions on remote host racnode1. Copying local host public key to the remote host racnode1 The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1. grid@racnode1's password: Done copying local host public key to the remote host racnode1 Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user grid.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:-- Running /usr/bin/ssh -x -l grid racnode1 date to verify SSH connectivity has been setup from local host to racnode1. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Thu Aug 15 05:55:00 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Thu Aug 15 05:55:00 UTC 2019

-Verification from racnode1 complete-
SSH verification complete.
spawn /export/app/oracle/product/12.2.0/dbhome_1/oui/prov/resources/scripts/sshUserSetup.sh -user oracle -hosts racnode1 -logfile /tmp/oracle_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm
The output of this script is also logged into /tmp/oracle_SetupSSH.log
Hosts are racnode1
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.035 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.036 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.040 ms

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.028/0.035/0.040/0.004 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 1
The script will setup SSH connectivity from the host racnode1 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode1 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes User chose to skip passphrase related questions. Creating .ssh directory on local host, if not present already Creating authorized_keys file on local host Changing permissions on authorized_keys to 644 on local host Creating known_hosts file on local host Changing permissions on known_hosts to 644 on local host Creating config file on local host If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup. Creating .ssh directory and setting permissions on remote host racnode1 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT. The script would create ~oracle/.ssh/config file on remote host racnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host racnode1. Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts. oracle@racnode1's password: Done with creating .ssh directory and setting permissions on remote host racnode1. Copying local host public key to the remote host racnode1 The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1. oracle@racnode1's password: Done copying local host public key to the remote host racnode1 Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:-- Running /usr/bin/ssh -x -l oracle racnode1 date to verify SSH connectivity has been setup from local host to racnode1. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Thu Aug 15 05:55:35 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Thu Aug 15 05:55:36 UTC 2019

-Verification from racnode1 complete-
SSH verification complete.
su - $GRID_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $GRID_USER@$node echo ok 2>&1"
su - $ORACLE_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $ORACLE_USER@$node echo ok 2>&1"
-bash: /etc/rac_env_vars: Permission denied
-bash: /etc/rac_env_vars: Permission denied

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED
Verifying Port Availability for component "Oracle Database Listener" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615)
Verifying ASM device sharedness check ...WARNING (PRVG-1615)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...WARNING (PRVG-1615)
Verifying I/O scheduler ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying VIP Subnet configuration check ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying DNS/NIS name service ...
Verifying Name Service Switch Configuration File Integrity ...PASSED
Verifying DNS/NIS name service ...FAILED (PRVG-1101)
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Oracle base: /export/app/grid ...
Verifying '/export/app/grid' ...PASSED
Verifying Oracle base: /export/app/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying Network interface bonding status of private interconnect network interfaces ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED

Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst
Date: Aug 15, 2019 5:55:38 AM
CVU home: /export/app/12.2.0/grid/
User: grid

Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30004] The SYS password entered is invalid.
CAUSE: Passwords may contain only alphanumeric characters from the chosen database character set, underscore (_), dollar sign ($), or pound sign (#).
ACTION: Provide a password as per recommendations.
[FATAL] [INS-30004] The ASMSNMP password entered is invalid.
CAUSE: Passwords may contain only alphanumeric characters from the chosen database character set, underscore (_), dollar sign ($), or pound sign (#).
ACTION: Provide a password as per recommendations.
Check /export/app/12.2.0/grid/install/root_racnode1_2019-08-15_05-56-50-556906853.log for the output of root script

Launching Oracle Grid Infrastructure Setup Wizard...

The configuration that needs to be performed as privileged user is not completed. The configuration tools can only be executed after that. You can find the logs of this session at: /export/app/oraInventory/logs/GridSetupActions2019-08-15_05-56-50AM

As a root user, execute the following script(s):

  1. /export/app/12.2.0/grid/root.sh

Execute /export/app/12.2.0/grid/root.sh on the following nodes: [racnode1]

After the successful root script execution, proceed to re-run the same 'gridSetup.sh -executeConfigTools' command.

Successfully Configured Software.
[FATAL] [DBT-10602] (Oracle Real Application Cluster (RAC) database) database cannot be created in this system.
CAUSE: Oracle Grid Infrastructure is not configured on the system.
ACTION: Configure Oracle Grid Infrastructure prior to creation of (Oracle Real Application Cluster (RAC) database). Refer to Oracle Grid Infrastructure Installation Guide for installation and configuration steps.
Checking on racnode1

psaini79 commented 5 years ago

Please execute the following as the root user on racnode1 and upload the logs:

su - grid -c "/export/app/12.2.0/grid/gridSetup.sh -waitforcompletion -ignorePrereq -silent -responseFile /tmp/grid.rsp"

/export/app/12.2.0/grid/root.sh

Upload the following logs:

/export/app/12.2.0/grid/install/root_racnode1_2019-08-15_*.log
/export/app/oraInventory/logs/GridSetupActions2019-08-15_*
/tmp/grid.rsp
/u01/app/19.3.0/grid/crs/install/crsconfig_params
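One way to bundle those logs from the Docker host (a sketch; the crsconfig_params path should match your Grid home, which for 12.2.0.1 in this thread is under /export/app, as noted earlier):

# Archive the requested logs inside the container, then copy the bundle out.
docker exec racnode1 /bin/bash -c 'tar -czf /tmp/gridsetup_logs.tgz /export/app/12.2.0/grid/install/root_racnode1_2019-08-15_*.log /export/app/oraInventory/logs/GridSetupActions2019-08-15_* /tmp/grid.rsp'
docker cp racnode1:/tmp/gridsetup_logs.tgz .
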
babloo2642 commented 5 years ago

@psaini79

Actually, the password contains special characters other than underscore (_), dollar sign ($), and pound sign (#). Do you want me to change it and try again?

bash-4.2# su - grid -c "/export/app/12.2.0/grid/gridSetup.sh -waitforcompletion -ignorePrereq -silent -responseFile /tmp/grid.rsp"
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30004] The SYS password entered is invalid.
CAUSE: Passwords may contain only alphanumeric characters from the chosen database character set, underscore (_), dollar sign ($), or pound sign (#).
ACTION: Provide a password as per recommendations.
[FATAL] [INS-30004] The ASMSNMP password entered is invalid.
CAUSE: Passwords may contain only alphanumeric characters from the chosen database character set, underscore (_), dollar sign ($), or pound sign (#).
ACTION: Provide a password as per recommendations.
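For what it is worth, INS-30004 is purely about the character set: only alphanumerics, underscore (_), dollar sign ($), and pound sign (#) are accepted. An illustrative way to generate a compliant value on the host (any equivalent method is fine):

tr -dc 'A-Za-z0-9_$#' < /dev/urandom | head -c 12; echo    # 12 random characters drawn only from the allowed set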

psaini79 commented 5 years ago

@babloo2642

Please do the following on the Docker host:

  1. cd /opt/.secrets
  2. docker exec racnode1 /bin/sh -c 'ls -ltr /run/secrets'
  3. rm -f /opt/.secrets/*
  4. docker exec racnode1 /bin/sh -c 'ls -ltr /run/secrets'
     Note: no files must remain on racnode1 under /run/secrets.
  5. vi /opt/.secrets/common_os_pwdfile and seed your password. It must follow the Oracle standard.
  6. openssl rand -out /opt/.secrets/pwd.key -hex 64
  7. openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key
  8. docker exec racnode1 /bin/sh -c 'ls -ltr /run/secrets'
     Note: the new files must be there.
  9. rm -f /opt/.secrets/common_os_pwdfile
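As an extra sanity check (not part of the official steps), the encrypted file can be decrypted with the same key to confirm it round-trips to the password seeded in step 5:

openssl enc -d -aes-256-cbc -in /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key    # should print back the seeded password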

Execute the following steps:

docker exec -i -t racnode1 /bin/bash
sudo /bin/bash
sh -x /opt/scripts/startup/runOracle.sh

Provide me the status after the above steps.

babloo2642 commented 5 years ago

@psaini79

Please find the output:

[grid@racnode1 ~]$ sudo /bin/bash
bash-4.2# sh -x /opt/scripts/startup/runOracle.sh

psaini79 commented 5 years ago

@babloo2642

Please execute the following steps and share the output:

docker exec -i -t racnode1 /bin/bash
crsctl check cluster
ps -u grid

Exit from the container and paste the following output:

docker logs -f racnode1
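Given that cluvfy keeps failing name resolution against the Docker embedded DNS (PRVF-5636, PRVG-10048, and PRVG-1101 all point at 127.0.0.11), it may also help to capture what the container actually resolves; a minimal check, assuming nslookup is available in the image:

docker exec racnode1 /bin/sh -c 'cat /etc/resolv.conf'
docker exec racnode1 /bin/sh -c 'nslookup racnode-scan'
docker exec racnode1 /bin/sh -c 'nslookup racnode1'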

babloo2642 commented 5 years ago

@psaini79

Please find the output below.

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[grid@racnode1 ~]$ ps -u grid
PID TTY TIME CMD
317 ? 00:00:00 mdbw007-mgmtd 1661 ? 00:00:14 oraagent.bin 1702 ? 00:00:00 tnslsnr 3135 ? 00:00:00 tnslsnr 3720 ? 00:00:00 mdbo000-mgmtd 3722 ? 00:00:00 oracle3722+as 3851 ? 00:00:00 oracle3851-mg 3852 pts/2 00:00:00 ps 4346 ? 00:00:00 oracle4346+as 4426 ? 00:00:00 oracle4426+as 4566 ? 00:00:00 ons 4567 ? 00:00:00 ons 4718 ? 00:00:00 tnslsnr 4749 pts/1 00:00:00 bash 4898 ? 00:00:02 scriptagent.bin 4919 ? 00:00:09 java 5148 ? 00:00:03 java 19501 ? 00:00:00 tnslsnr 19660 ? 00:00:00 mdbpmon-mgmtd 19662 ? 00:00:00 mdbclmn-mgmtd 19664 ? 00:00:00 mdbpsp0-mgmtd 19667 ? 00:00:00 mdbvktm-mgmtd 19671 ? 00:00:00 mdbgen0-mgmtd 19673 ? 00:00:00 mdbmman-mgmtd 19677 ? 00:00:00 mdbscmn-mgmtd 19681 ? 00:00:00 mdbdiag-mgmtd 19683 ? 00:00:00 mdbscmn-mgmtd 19687 ? 00:00:00 mdbdbrm-mgmtd 19689 ? 00:00:00 mdbvkrm-mgmtd 19691 ? 00:00:00 mdbsvcb-mgmtd 19693 ? 00:00:00 mdbpman-mgmtd 19695 ? 00:00:00 mdbdia0-mgmtd 19697 ? 00:00:00 mdbdbw0-mgmtd 19700 ? 00:00:00 mdblgwr-mgmtd 19702 ? 00:00:00 mdbckpt-mgmtd 19704 ? 00:00:00 mdblg00-mgmtd 19706 ? 00:00:00 mdbsmon-mgmtd 19708 ? 00:00:00 mdblg01-mgmtd 19710 ? 00:00:00 mdbsmco-mgmtd 19712 ? 00:00:00 mdblreg-mgmtd 19714 ? 00:00:00 mdbw000-mgmtd 19716 ? 00:00:00 mdbpxmn-mgmtd 19718 ? 00:00:00 mdbw001-mgmtd 19720 ? 00:00:00 mdbrbal-mgmtd 19722 ? 00:00:00 mdbasmb-mgmtd 19724 ? 00:00:00 mdbfenc-mgmtd 19726 ? 00:00:04 mdbmmon-mgmtd 19729 ? 00:00:00 mdbmmnl-mgmtd 19730 ? 00:00:00 oracle19730+a 19732 ? 00:00:00 mdbd000-mgmtd 19734 ? 00:00:00 mdbs000-mgmtd 19736 ? 00:00:00 mdbtmon-mgmtd 19738 ? 00:00:00 mdbmark-mgmtd 19805 ? 00:00:00 mdbtt00-mgmtd 19807 ? 00:00:00 mdbtt01-mgmtd 19809 ? 00:00:00 mdbtt02-mgmtd 19825 ? 00:00:00 mdbaqpc-mgmtd 19842 ? 00:00:05 mdbcjq0-mgmtd 19991 ? 00:00:00 oracle19991-m 20579 ? 00:00:00 mdbqm02-mgmtd 20586 ? 00:00:00 mdbq003-mgmtd 21325 ? 00:00:01 mdbp000-mgmtd 21327 ? 00:00:01 mdbp001-mgmtd 21329 ? 00:00:00 mdbp002-mgmtd 21565 ? 00:00:00 oracle21565+a 21680 ? 00:00:00 mdbq004-mgmtd 21735 ? 00:00:00 mdbw002-mgmtd 21879 ? 00:00:00 mdbp003-mgmtd 22004 ? 00:00:00 oracle22004-m 22672 ? 00:00:00 mdbw003-mgmtd 23761 ? 00:00:00 oracle23761-m 23856 ? 00:00:22 java 24553 ? 00:00:01 oracle24553-m 24577 ? 00:00:00 oracle24577-m 24708 ? 00:00:00 mdbw004-mgmtd 25722 ? 00:00:01 oracle25722-m 25746 ? 00:00:00 oracle25746-m 30209 ? 00:00:00 mdbw005-mgmtd 30464 ? 00:00:06 oraagent.bin 30482 ? 00:00:02 mdnsd.bin 30488 ? 00:00:07 evmd.bin 30526 ? 00:00:02 gpnpd.bin 30562 ? 00:00:02 evmlogger.bin 30574 ? 00:00:07 gipcd.bin 30641 ? 00:00:11 ocssd.bin 31675 ? 00:00:00 mdbw006-mgmtd 31708 ? 00:00:00 asmpmon+asm1 31710 ? 00:00:00 asmclmn+asm1 31712 ? 00:00:00 asmpsp0+asm1 31714 ? 00:00:00 asmvktm+asm1 31718 ? 00:00:00 asmgen0+asm1 31720 ? 00:00:00 asmmman+asm1 31724 ? 00:00:00 asmscmn+asm1 31728 ? 00:00:01 asmdiag+asm1 31730 ? 00:00:00 asmping+asm1 31732 ? 00:00:00 asmpman+asm1 31734 ? 00:00:05 asmdia0+asm1 31736 ? 00:00:02 asmlmon+asm1 31738 ? 00:00:01 asmlmd0+asm1 31740 ? 00:00:04 asmlms0+asm1 31744 ? 00:00:01 asmlmhb+asm1 31746 ? 00:00:00 asmlck1+asm1 31748 ? 00:00:00 asmdbw0+asm1 31750 ? 00:00:00 asmlgwr+asm1 31752 ? 00:00:00 asmckpt+asm1 31754 ? 00:00:00 asmsmon+asm1 31756 ? 00:00:00 asmlreg+asm1 31758 ? 00:00:00 asmpxmn+asm1 31760 ? 00:00:00 asmrbal+asm1 31762 ? 00:00:00 asmgmon+asm1 31764 ? 00:00:00 asmmmon+asm1 31766 ? 00:00:01 asmmmnl+asm1 31768 ? 00:00:03 asmimr0+asm1 31770 ? 00:00:00 asmlck0+asm1 31774 ? 00:00:02 asmgcr0+asm1 31863 ? 00:00:00 oracle31863+a 31888 ? 00:00:00 asmasmb+asm1 31891 ? 00:00:00 oracle31891+a 31912 ? 00:00:01 oracle31912+a 32537 pts/2 00:00:00 bash

[grid@racnode1 ~]$ exit

[root@docker ~]# docker logs -f racnode1
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization other.
Detected architecture x86-64.

Welcome to Oracle Linux Server 7.6!

Set hostname to <racnode1>.
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory

/usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1. Binding to IPv6 address not available since kernel does not support IPv6. Binding to IPv6 address not available since kernel does not support IPv6. Cannot add dependency job for unit display-manager.service, ignoring: Unit not found. [ OK ] Reached target Swap. [ OK ] Reached target RPC Port Mapper. [ OK ] Created slice Root Slice. [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Created slice User and Session Slice. [ OK ] Listening on Journal Socket. [ OK ] Created slice System Slice. Starting Journal Service... [ OK ] Created slice system-getty.slice. [ OK ] Reached target Local Encrypted Volumes. Starting Read and set NIS domainname from /etc/sysconfig/network... [ OK ] Reached target Slices. Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory [ OK ] Reached target Local File Systems (Pre). Starting Configure read-only root support... Starting Rebuild Hardware Database... [ OK ] Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Forward Password Requests to Wall Directory Watch. [ OK ] Started Journal Service. [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. [ OK ] Started Configure read-only root support. Starting Load/Save Random Seed... [ OK ] Reached target Local File Systems. Starting Mark the need to relabel after reboot... Starting Rebuild Journal Catalog... Starting Preprocess NFS configuration... Starting Flush Journal to Persistent Storage... [ OK ] Started Load/Save Random Seed. [ OK ] Started Mark the need to relabel after reboot. [ OK ] Started Rebuild Journal Catalog. [ OK ] Started Preprocess NFS configuration. [ OK ] Started Flush Journal to Persistent Storage. Starting Create Volatile Files and Directories... [ OK ] Started Create Volatile Files and Directories. Starting Update UTMP about System Boot/Shutdown... Mounting RPC Pipe File System... [FAILED] Failed to mount RPC Pipe File System. See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details. [DEPEND] Dependency failed for rpc_pipefs.target. [DEPEND] Dependency failed for RPC security service for NFS client and server. [ OK ] Started Update UTMP about System Boot/Shutdown. [ OK ] Started Rebuild Hardware Database. Starting Update is Completed... [ OK ] Started Update is Completed. [ OK ] Reached target System Initialization. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Listening on RPCbind Server Activation Socket. [ OK ] Reached target Sockets. [ OK ] Reached target Basic System. Starting GSSAPI Proxy Daemon... Starting Login Service... Starting LSB: Bring up/down networking... Starting Resets System Activity Logs... [ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon. Starting OpenSSH Server Key Generation... [ OK ] Started D-Bus System Message Bus. Starting RPC bind service... Starting Cleanup of Temporary Directories... [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Started RPC bind service. [ OK ] Started Cleanup of Temporary Directories. [ OK ] Started Login Service. 
[ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Resets System Activity Logs. [ OK ] Started Permit User Sessions. [ OK ] Started Command Scheduler. [ OK ] Started OpenSSH Server Key Generation. [ OK ] Started LSB: Bring up/down networking. [ OK ] Reached target Network. Starting /etc/rc.d/rc.local Compatibility... Starting OpenSSH server daemon... [ OK ] Reached target Network is Online. Starting Notify NFS peers of a restart... [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Console Getty. [ OK ] Reached target Login Prompts. [ OK ] Started Notify NFS peers of a restart. [ OK ] Started OpenSSH server daemon. [ OK ] Reached target Multi-User System. [ OK ] Reached target Graphical Interface. Starting Update UTMP about System Runlevel Changes... 07-29-2019 11:39:44 UTC : : Process id of the program : 07-29-2019 11:39:44 UTC : : ################################################# 07-29-2019 11:39:44 UTC : : Starting Grid Installation
07-29-2019 11:39:44 UTC : : ################################################# 07-29-2019 11:39:44 UTC : : Pre-Grid Setup steps are in process 07-29-2019 11:39:44 UTC : : Process id of the program : 07-29-2019 11:39:44 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory 07-29-2019 11:39:44 UTC : : Resetting Failed Services 07-29-2019 11:39:44 UTC : : Sleeping for 60 seconds

Oracle Linux Server 7.6
Kernel 4.1.12-124.25.1.el7uek.x86_64 on an x86_64

racnode1 login: 07-29-2019 11:40:44 UTC : : Systemctl state is running! 07-29-2019 11:40:44 UTC : : Setting correct permissions for /bin/ping 07-29-2019 11:40:44 UTC : : Public IP is set to 172.16.1.150 07-29-2019 11:40:44 UTC : : RAC Node PUBLIC Hostname is set to racnode1 07-29-2019 11:40:44 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.coracnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 07-29-2019 11:40:44 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required 07-29-2019 11:40:44 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 07-29-2019 11:40:44 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required 07-29-2019 11:40:44 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required 07-29-2019 11:40:44 UTC : : Preapring Device list 07-29-2019 11:40:44 UTC : : Changing Disk permission and ownership /dev/asm_disk1 07-29-2019 11:40:44 UTC : : Changing Disk permission and ownership /dev/asm_disk2 07-29-2019 11:40:44 UTC : : ##################################################################### 07-29-2019 11:40:44 UTC : : RAC setup will begin in 2 minutes
07-29-2019 11:40:44 UTC : : #################################################################### 07-29-2019 11:40:46 UTC : : ################################################### 07-29-2019 11:40:46 UTC : : Pre-Grid Setup steps completed 07-29-2019 11:40:46 UTC : : ################################################### 07-29-2019 11:40:46 UTC : : Checking if grid is already configured 07-29-2019 11:40:46 UTC : : Process id of the program : 07-29-2019 11:40:46 UTC : : Public IP is set to 172.16.1.150 07-29-2019 11:40:46 UTC : : RAC Node PUBLIC Hostname is set to racnode1 07-29-2019 11:40:46 UTC : : Domain is defined to example.com 07-29-2019 11:40:46 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 07-29-2019 11:40:46 UTC : : RAC VIP set to 172.16.1.160 07-29-2019 11:40:46 UTC : : RAC Node VIP hostname is set to racnode1-vip 07-29-2019 11:40:46 UTC : : SCAN_NAME name is racnode-scan 07-29-2019 11:40:46 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port. 07-29-2019 11:41:06 UTC : : 172.16.1.70 07-29-2019 11:41:06 UTC : : SCAN Name resolving to IP. Check Passed! 07-29-2019 11:41:06 UTC : : SCAN_IP name is 172.16.1.70 07-29-2019 11:41:06 UTC : : RAC Node PRIV IP is set to 192.168.17.150 07-29-2019 11:41:06 UTC : : RAC Node private hostname is set to racnode1-priv 07-29-2019 11:41:06 UTC : : CMAN_HOSTNAME name is racnode-cman1 07-29-2019 11:41:06 UTC : : CMAN_IP name is 172.16.1.15 07-29-2019 11:41:06 UTC : : Cluster Name is not defined 07-29-2019 11:41:06 UTC : : Cluster name is set to 'racnode-c' 07-29-2019 11:41:06 UTC : : Password file generated 07-29-2019 11:41:06 UTC : : Common OS Password string is set for Grid user 07-29-2019 11:41:06 UTC : : Common OS Password string is set for Oracle user 07-29-2019 11:41:06 UTC : : Common OS Password string is set for Oracle Database 07-29-2019 11:41:06 UTC : : Setting CONFIGURE_GNS to false 07-29-2019 11:41:06 UTC : : GRID_RESPONSE_FILE env variable set to empty. 
configGrid.sh will use standard cluster responsefile 07-29-2019 11:41:06 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 07-29-2019 11:41:06 UTC : : IGNORE_CVU_CHECKS is set to true 07-29-2019 11:41:06 UTC : : Oracle SID is set to ORCLCDB 07-29-2019 11:41:06 UTC : : Oracle PDB name is set to ORCLPDB 07-29-2019 11:41:06 UTC : : Check passed for network card eth1 for public IP 172.16.1.150 07-29-2019 11:41:06 UTC : : Public Netmask : 255.255.255.0 07-29-2019 11:41:06 UTC : : Check passed for network card eth0 for private IP 192.168.17.150 07-29-2019 11:41:06 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File 07-29-2019 11:41:06 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5 07-29-2019 11:41:06 UTC : : Setting random password for grid user 07-29-2019 11:41:06 UTC : : Setting random password for oracle user 07-29-2019 11:41:06 UTC : : Calling setupSSH function 07-29-2019 11:41:06 UTC : : SSh will be setup among racnode1 nodes 07-29-2019 11:41:06 UTC : : Running SSH setup for grid user between nodes racnode1 07-29-2019 11:41:43 UTC : : Running SSH setup for oracle user between nodes racnode1 07-29-2019 11:41:49 UTC : : SSH check fine for the racnode1 07-29-2019 11:41:49 UTC : : SSH check fine for the oracle@racnode1 07-29-2019 11:41:49 UTC : : Preapring Device list 07-29-2019 11:41:49 UTC : : Changing Disk permission and ownership 07-29-2019 11:41:49 UTC : : Changing Disk permission and ownership 07-29-2019 11:41:49 UTC : : ASM Disk size : 0 07-29-2019 11:41:49 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2, 07-29-2019 11:41:49 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2 07-29-2019 11:41:49 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC 07-29-2019 11:41:49 UTC : : Nodes in the cluster racnode1 07-29-2019 11:41:49 UTC : : Setting Device permissions for RAC Install on racnode1 07-29-2019 11:41:49 UTC : : Preapring ASM Device list 07-29-2019 11:41:49 UTC : : Changing Disk permission and ownership 07-29-2019 11:41:49 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 07-29-2019 11:41:49 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 07-29-2019 11:41:49 UTC : : Populate Rac Env Vars on Remote Hosts 07-29-2019 11:41:49 UTC : : Changing Disk permission and ownership 07-29-2019 11:41:49 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 07-29-2019 11:41:49 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 07-29-2019 11:41:49 UTC : : Populate Rac Env Vars on Remote Hosts 07-29-2019 11:41:49 UTC : : Generating Reponsefile 07-29-2019 11:41:50 UTC : : Running cluvfy Checks 07-29-2019 11:41:50 UTC : : Performing Cluvfy Checks 07-29-2019 11:42:59 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

ERROR: PRVG-10467 : The default Oracle Inventory group could not be determined.

Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: dba ...PASSED Verifying Group Membership: dba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...PASSED Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...PASSED Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED Verifying Port Availability for component "Oracle Database Listener" ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying Multicast check ...PASSED Verifying ASM Integrity ... Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... 
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615) Verifying ASM device sharedness check ...WARNING (PRVG-1615) Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...WARNING (PRVG-1615) Verifying I/O scheduler ... Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying I/O scheduler ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED Verifying Same core file name pattern ...PASSED Verifying User Mask ...PASSED Verifying User Not In Group "root": grid ...PASSED Verifying Time zone consistency ...PASSED Verifying VIP Subnet configuration check ...PASSED Verifying resolv.conf Integrity ... Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying DNS/NIS name service ... Verifying Name Service Switch Configuration File Integrity ...PASSED Verifying DNS/NIS name service ...FAILED (PRVG-1101) Verifying Single Client Access Name (SCAN) ...PASSED Verifying Domain Sockets ...PASSED Verifying /boot mount ...PASSED Verifying Daemon "avahi-daemon" not configured and running ...PASSED Verifying Daemon "proxyt" not configured and running ...PASSED Verifying loopback network interface address ...PASSED Verifying Oracle base: /export/app/grid ... Verifying '/export/app/grid' ...PASSED Verifying Oracle base: /export/app/grid ...PASSED Verifying User Equivalence ...PASSED Verifying Network interface bonding status of private interconnect network interfaces ...PASSED Verifying File system mount options for path /var ...PASSED Verifying zeroconf check ...PASSED Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst Date: Jul 29, 2019 11:41:52 AM CVU home: /export/app/12.2.0/grid/ User: grid 07-29-2019 11:42:59 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks. 07-29-2019 11:42:59 UTC : : Running Grid Installation 07-29-2019 11:43:14 UTC : : Running root.sh 07-29-2019 11:43:14 UTC : : Nodes in the cluster racnode1 07-29-2019 11:43:14 UTC : : Running root.sh on racnode1 07-29-2019 11:43:15 UTC : : Running post root.sh steps 07-29-2019 11:43:15 UTC : : Running post root.sh steps to setup Grid env 07-29-2019 11:43:21 UTC : : Checking Cluster Status 07-29-2019 11:43:21 UTC : : Nodes in the cluster 07-29-2019 11:43:21 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed 07-29-2019 11:43:21 UTC : : Generating DB Responsefile Running DB creation 07-29-2019 11:43:21 UTC : : Running DB creation 07-29-2019 11:43:46 UTC : : Checking DB status 07-29-2019 11:43:47 UTC : : ORCLCDB is not up and running on racnode1 07-29-2019 11:43:47 UTC : : Error has occurred in Grid Setup, Please verify! PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=racnode1 TERM=xterm NODE_VIP=172.16.1.160 VIP_HOSTNAME=racnode1-vip PRIV_IP=192.168.17.150 PRIV_HOSTNAME=racnode1-priv PUBLIC_IP=172.16.1.150 PUBLIC_HOSTNAME=racnode1 SCAN_NAME=racnode-scan SCAN_IP=172.16.1.70 OP_TYPE=INSTALL DOMAIN=example.com ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 ASM_DISCOVERY_DIR=/dev CMAN_HOSTNAME=racnode-cman1 CMAN_IP=172.16.1.15 COMMON_OS_PWD_FILE=common_os_pwdfile.enc PWD_KEY=pwd.key SETUP_LINUX_FILE=setupLinuxEnv.sh INSTALL_DIR=/opt/scripts GRID_BASE=/export/app/grid GRID_HOME=/export/app/12.2.0/grid INSTALL_FILE_1=linuxx64_12201_grid_home.zip GRID_INSTALL_RSP=grid.rsp GRID_SETUP_FILE=setupGrid.sh FIXUP_PREQ_FILE=fixupPreq.sh INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh INSTALL_GRID_PATCH=applyGridPatch.sh INVENTORY=/export/app/oraInventory CONFIGGRID=configGrid.sh ADDNODE=AddNode.sh DELNODE=DelNode.sh ADDNODE_RSP=grid_addnode.rsp SETUPSSH=setupSSH.expect GRID_PATCH=p27383741_122010_Linux-x86-64.zip PATCH_NUMBER=27383741 SETUPDOCKERORACLEINIT=setupdockeroracleinit.sh DOCKERORACLEINIT=dockeroracleinit GRID_USER_HOME=/home/grid SETUPGRIDENV=setupGridEnv.sh DB_BASE=/export/app/oracle DB_HOME=/export/app/oracle/product/12.2.0/dbhome_1 INSTALL_FILE_2=linuxx64_12201_database.zip DB_INSTALL_RSP=db_inst.rsp DBCA_RSP=dbca.rsp DB_SETUP_FILE=setupDB.sh PWD_FILE=setPassword.sh RUN_FILE=runOracle.sh STOP_FILE=stopOracle.sh ENABLE_RAC_FILE=enableRAC.sh CHECK_DB_FILE=checkDBStatus.sh USER_SCRIPTS_FILE=runUserScripts.sh REMOTE_LISTENER_FILE=remoteListener.sh INSTALL_DB_BINARIES_FILE=installDBBinaries.sh RESET_OS_PASSWORD=resetOSPassword.sh MULTI_NODE_INSTALL=MultiNodeInstall.py FUNCTIONS=functions.sh COMMON_SCRIPTS=/common_scripts CHECK_SPACE_FILE=checkSpace.sh EXPECT=/usr/bin/expect BIN=/usr/sbin container=true INSTALL_SCRIPTS=/opt/scripts/install SCRIPT_DIR=/opt/scripts/startup GRID_PATH=/export/app/12.2.0/grid/bin:/export/app/12.2.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DB_PATH=/export/app/oracle/product/12.2.0/dbhome_1/bin:/export/app/oracle/product/12.2.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin GRID_LD_LIBRARY_PATH=/export/app/12.2.0/grid/lib:/usr/lib:/lib 
DB_LD_LIBRARY_PATH=/export/app/oracle/product/12.2.0/dbhome_1/lib:/usr/lib:/lib HOME=/home/grid Failed to parse kernel command line, ignoring: No such file or directorysystemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Detected virtualization other. Detected architecture x86-64.

Welcome to Oracle Linux Server 7.6!

Set hostname to . Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory /usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1. Binding to IPv6 address not available since kernel does not support IPv6. Binding to IPv6 address not available since kernel does not support IPv6. Cannot add dependency job for unit display-manager.service, ignoring: Unit not found. [ OK ] Started Dispatch Password Requests to Console Directory Watch. [ OK ] Reached target RPC Port Mapper. [ OK ] Created slice Root Slice. [ OK ] Created slice User and Session Slice. [ OK ] Listening on Journal Socket. [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Started Forward Password Requests to Wall Directory Watch. [ OK ] Created slice System Slice. Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Journal Service... [ OK ] Created slice system-getty.slice. [ OK ] Reached target Slices. Starting Remount Root and Kernel File Systems... Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory [ OK ] Reached target Swap. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Reached target Local Encrypted Volumes. [ OK ] Started Remount Root and Kernel File Systems. Starting Configure read-only root support... [ OK ] Reached target Local File Systems (Pre). [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. [ OK ] Started Configure read-only root support. [ OK ] Reached target Local File Systems. Couldn't determine result for ConditionKernelCommandLine=|autorelabel for rhel-autorelabel.service, assuming failed: No such file or directory Starting Preprocess NFS configuration... Starting Load/Save Random Seed... [ OK ] Started Journal Service. Starting Flush Journal to Persistent Storage... [ OK ] Started Preprocess NFS configuration. [ OK ] Started Load/Save Random Seed. [ OK ] Started Flush Journal to Persistent Storage. Starting Create Volatile Files and Directories... [ OK ] Started Create Volatile Files and Directories. Starting Update UTMP about System Boot/Shutdown... Mounting RPC Pipe File System... [FAILED] Failed to mount RPC Pipe File System. See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details. [DEPEND] Dependency failed for rpc_pipefs.target. [DEPEND] Dependency failed for RPC security service for NFS client and server. [ OK ] Started Update UTMP about System Boot/Shutdown. [ OK ] Reached target System Initialization. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Listening on RPCbind Server Activation Socket. Starting RPC bind service... [ OK ] Reached target Sockets. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Reached target Basic System. Starting Login Service... [ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon. Starting LSB: Bring up/down networking... Starting Resets System Activity Logs... [ OK ] Started D-Bus System Message Bus. Starting GSSAPI Proxy Daemon... [ OK ] Started RPC bind service. [ OK ] Started Resets System Activity Logs. 
Starting Cleanup of Temporary Directories... [ OK ] Started Login Service. [ OK ] Started Cleanup of Temporary Directories. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Command Scheduler. [ OK ] Started LSB: Bring up/down networking. [ OK ] Reached target Network. Starting OpenSSH server daemon... Starting /etc/rc.d/rc.local Compatibility... [ OK ] Reached target Network is Online. Starting Notify NFS peers of a restart... [ OK ] Started Notify NFS peers of a restart. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Console Getty. [ OK ] Reached target Login Prompts. 08-12-2019 03:45:00 UTC : : Process id of the program : 08-12-2019 03:45:00 UTC : : ################################################# 08-12-2019 03:45:00 UTC : : Starting Grid Installation
08-12-2019 03:45:00 UTC : : ################################################# 08-12-2019 03:45:00 UTC : : Pre-Grid Setup steps are in process 08-12-2019 03:45:00 UTC : : Process id of the program : 08-12-2019 03:45:00 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directoryFailed to parse kernel command line, ignoring: No such file or directory

08-12-2019 03:45:00 UTC : : Resetting Failed Services 08-12-2019 03:45:00 UTC : : Sleeping for 60 seconds [ OK ] Started OpenSSH server daemon. [ OK ] Reached target Multi-User System. [ OK ] Reached target Graphical Interface. Starting Update UTMP about System Runlevel Changes...

Oracle Linux Server 7.6
Kernel 4.1.12-124.25.1.el7uek.x86_64 on an x86_64

racnode1 login: 08-12-2019 03:46:00 UTC : : Systemctl state is running! 08-12-2019 03:46:00 UTC : : Setting correct permissions for /bin/ping 08-12-2019 03:46:00 UTC : : Public IP is set to 172.16.1.150 08-12-2019 03:46:00 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-12-2019 03:46:00 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.coracnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-12-2019 03:46:00 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required 08-12-2019 03:46:00 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-12-2019 03:46:00 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required 08-12-2019 03:46:00 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required 08-12-2019 03:46:00 UTC : : Preapring Device list 08-12-2019 03:46:00 UTC : : Changing Disk permission and ownership /dev/asm_disk1 08-12-2019 03:46:00 UTC : : Changing Disk permission and ownership /dev/asm_disk2 08-12-2019 03:46:00 UTC : : ##################################################################### 08-12-2019 03:46:00 UTC : : RAC setup will begin in 2 minutes
08-12-2019 03:46:00 UTC : : #################################################################### 08-12-2019 03:46:02 UTC : : ################################################### 08-12-2019 03:46:02 UTC : : Pre-Grid Setup steps completed 08-12-2019 03:46:02 UTC : : ################################################### 08-12-2019 03:46:02 UTC : : Checking if grid is already configured 08-12-2019 03:46:02 UTC : : Process id of the program : 08-12-2019 03:46:02 UTC : : Public IP is set to 172.16.1.150 08-12-2019 03:46:02 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-12-2019 03:46:02 UTC : : Domain is defined to example.com 08-12-2019 03:46:02 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 08-12-2019 03:46:02 UTC : : RAC VIP set to 172.16.1.160 08-12-2019 03:46:02 UTC : : RAC Node VIP hostname is set to racnode1-vip 08-12-2019 03:46:02 UTC : : SCAN_NAME name is racnode-scan 08-12-2019 03:46:02 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port. 08-12-2019 03:46:22 UTC : : 172.16.1.70 08-12-2019 03:46:22 UTC : : SCAN Name resolving to IP. Check Passed! 08-12-2019 03:46:22 UTC : : SCAN_IP name is 172.16.1.70 08-12-2019 03:46:22 UTC : : RAC Node PRIV IP is set to 192.168.17.150 08-12-2019 03:46:22 UTC : : RAC Node private hostname is set to racnode1-priv 08-12-2019 03:46:22 UTC : : CMAN_HOSTNAME name is racnode-cman1 08-12-2019 03:46:22 UTC : : CMAN_IP name is 172.16.1.15 08-12-2019 03:46:22 UTC : : Cluster Name is not defined 08-12-2019 03:46:22 UTC : : Cluster name is set to 'racnode-c' 08-12-2019 03:46:22 UTC : : Password file generated 08-12-2019 03:46:22 UTC : : Common OS Password string is set for Grid user 08-12-2019 03:46:22 UTC : : Common OS Password string is set for Oracle user 08-12-2019 03:46:22 UTC : : Common OS Password string is set for Oracle Database 08-12-2019 03:46:22 UTC : : Setting CONFIGURE_GNS to false 08-12-2019 03:46:22 UTC : : GRID_RESPONSE_FILE env variable set to empty. 
configGrid.sh will use standard cluster responsefile 08-12-2019 03:46:22 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 08-12-2019 03:46:22 UTC : : IGNORE_CVU_CHECKS is set to true 08-12-2019 03:46:22 UTC : : Oracle SID is set to ORCLCDB 08-12-2019 03:46:22 UTC : : Oracle PDB name is set to ORCLPDB 08-12-2019 03:46:22 UTC : : Check passed for network card eth1 for public IP 172.16.1.150 08-12-2019 03:46:22 UTC : : Public Netmask : 255.255.255.0 08-12-2019 03:46:22 UTC : : Check passed for network card eth0 for private IP 192.168.17.150 08-12-2019 03:46:22 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File 08-12-2019 03:46:22 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5 08-12-2019 03:46:22 UTC : : Setting random password for grid user 08-12-2019 03:46:23 UTC : : Setting random password for oracle user 08-12-2019 03:46:23 UTC : : Calling setupSSH function 08-12-2019 03:46:23 UTC : : SSh will be setup among racnode1 nodes 08-12-2019 03:46:23 UTC : : Running SSH setup for grid user between nodes racnode1 08-12-2019 03:46:59 UTC : : Running SSH setup for oracle user between nodes racnode1 08-12-2019 03:47:05 UTC : : SSH check fine for the racnode1 08-12-2019 03:47:05 UTC : : SSH check fine for the oracle@racnode1 08-12-2019 03:47:05 UTC : : Preapring Device list 08-12-2019 03:47:05 UTC : : Changing Disk permission and ownership 08-12-2019 03:47:05 UTC : : Changing Disk permission and ownership 08-12-2019 03:47:05 UTC : : ASM Disk size : 0 08-12-2019 03:47:05 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2, 08-12-2019 03:47:05 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2 08-12-2019 03:47:05 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC 08-12-2019 03:47:05 UTC : : Nodes in the cluster racnode1 08-12-2019 03:47:05 UTC : : Setting Device permissions for RAC Install on racnode1 08-12-2019 03:47:05 UTC : : Preapring ASM Device list 08-12-2019 03:47:05 UTC : : Changing Disk permission and ownership 08-12-2019 03:47:05 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-12-2019 03:47:05 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-12-2019 03:47:05 UTC : : Populate Rac Env Vars on Remote Hosts 08-12-2019 03:47:05 UTC : : Changing Disk permission and ownership 08-12-2019 03:47:05 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-12-2019 03:47:05 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-12-2019 03:47:05 UTC : : Populate Rac Env Vars on Remote Hosts 08-12-2019 03:47:05 UTC : : Generating Reponsefile 08-12-2019 03:47:06 UTC : : Running cluvfy Checks 08-12-2019 03:47:06 UTC : : Performing Cluvfy Checks 08-12-2019 03:48:21 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: dba ...PASSED Verifying Group Existence: oinstall ...PASSED Verifying Group Membership: dba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Group Membership: oinstall(Primary) ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...PASSED Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...PASSED Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED Verifying Port Availability for component "Oracle Database Listener" ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying Multicast check ...PASSED Verifying ASM Integrity ... Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... 
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615) Verifying ASM device sharedness check ...WARNING (PRVG-1615) Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...WARNING (PRVG-1615) Verifying I/O scheduler ... Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying I/O scheduler ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED Verifying Same core file name pattern ...PASSED Verifying User Mask ...PASSED Verifying User Not In Group "root": grid ...PASSED Verifying Time zone consistency ...PASSED Verifying VIP Subnet configuration check ...PASSED Verifying resolv.conf Integrity ... Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying DNS/NIS name service ... Verifying Name Service Switch Configuration File Integrity ...PASSED Verifying DNS/NIS name service ...FAILED (PRVG-1101) Verifying Single Client Access Name (SCAN) ...PASSED Verifying Domain Sockets ...PASSED Verifying /boot mount ...PASSED Verifying Daemon "avahi-daemon" not configured and running ...PASSED Verifying Daemon "proxyt" not configured and running ...PASSED Verifying loopback network interface address ...PASSED Verifying Oracle base: /export/app/grid ... Verifying '/export/app/grid' ...PASSED Verifying Oracle base: /export/app/grid ...PASSED Verifying User Equivalence ...PASSED Verifying Network interface bonding status of private interconnect network interfaces ...PASSED Verifying File system mount options for path /var ...PASSED Verifying zeroconf check ...PASSED Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst Date: Aug 12, 2019 3:47:13 AM CVU home: /export/app/12.2.0/grid/ User: grid 08-12-2019 03:48:21 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks. 08-12-2019 03:48:21 UTC : : Running Grid Installation 08-12-2019 03:48:41 UTC : : Running root.sh 08-12-2019 03:48:41 UTC : : Nodes in the cluster racnode1 08-12-2019 03:48:41 UTC : : Running root.sh on racnode1 08-12-2019 03:48:41 UTC : : Running post root.sh steps 08-12-2019 03:48:41 UTC : : Running post root.sh steps to setup Grid env 08-12-2019 03:48:47 UTC : : Checking Cluster Status 08-12-2019 03:48:47 UTC : : Nodes in the cluster 08-12-2019 03:48:47 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed 08-12-2019 03:48:47 UTC : : Generating DB Responsefile Running DB creation 08-12-2019 03:48:47 UTC : : Running DB creation 08-12-2019 03:49:02 UTC : : Checking DB status 08-12-2019 03:49:03 UTC : : ORCLCDB is not up and running on racnode1 08-12-2019 03:49:03 UTC : : Error has occurred in Grid Setup, Please verify! 08-14-2019 19:16:50 UTC : : Process id of the program : 08-14-2019 19:16:50 UTC : : ################################################# 08-14-2019 19:16:50 UTC : : Starting Grid Installation
08-14-2019 19:16:50 UTC : : ################################################# 08-14-2019 19:16:50 UTC : : Pre-Grid Setup steps are in process 08-14-2019 19:16:50 UTC : : Process id of the program : 08-14-2019 19:16:50 UTC : : Sleeping for 60 seconds 08-14-2019 19:17:50 UTC : : Systemctl state is running! 08-14-2019 19:17:50 UTC : : Setting correct permissions for /bin/ping 08-14-2019 19:17:50 UTC : : Public IP is set to 172.16.1.150 08-14-2019 19:17:50 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-14-2019 19:17:50 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.coracnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-14-2019 19:17:50 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required 08-14-2019 19:17:50 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-14-2019 19:17:50 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required 08-14-2019 19:17:50 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required 08-14-2019 19:17:50 UTC : : Preapring Device list 08-14-2019 19:17:50 UTC : : Changing Disk permission and ownership /dev/asm_disk1 08-14-2019 19:17:50 UTC : : Changing Disk permission and ownership /dev/asm_disk2 08-14-2019 19:17:50 UTC : : ##################################################################### 08-14-2019 19:17:50 UTC : : RAC setup will begin in 2 minutes
08-14-2019 19:17:50 UTC : : #################################################################### 08-14-2019 19:17:52 UTC : : ################################################### 08-14-2019 19:17:52 UTC : : Pre-Grid Setup steps completed 08-14-2019 19:17:52 UTC : : ################################################### 08-14-2019 19:17:52 UTC : : Checking if grid is already configured 08-14-2019 19:17:52 UTC : : Process id of the program : 08-14-2019 19:17:52 UTC : : Public IP is set to 172.16.1.150 08-14-2019 19:17:52 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-14-2019 19:17:52 UTC : : Domain is defined to example.com 08-14-2019 19:17:52 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 08-14-2019 19:17:52 UTC : : RAC VIP set to 172.16.1.160 08-14-2019 19:17:52 UTC : : RAC Node VIP hostname is set to racnode1-vip 08-14-2019 19:17:52 UTC : : SCAN_NAME name is racnode-scan 08-14-2019 19:17:52 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port. 08-14-2019 19:18:12 UTC : : 172.16.1.70 08-14-2019 19:18:12 UTC : : SCAN Name resolving to IP. Check Passed! 08-14-2019 19:18:12 UTC : : SCAN_IP name is 172.16.1.70 08-14-2019 19:18:12 UTC : : RAC Node PRIV IP is set to 192.168.17.150 08-14-2019 19:18:12 UTC : : RAC Node private hostname is set to racnode1-priv 08-14-2019 19:18:12 UTC : : CMAN_HOSTNAME name is racnode-cman1 08-14-2019 19:18:12 UTC : : CMAN_IP name is 172.16.1.15 08-14-2019 19:18:12 UTC : : Cluster Name is not defined 08-14-2019 19:18:12 UTC : : Cluster name is set to 'racnode-c' 08-14-2019 19:18:12 UTC : : Password file generated 08-14-2019 19:18:12 UTC : : Common OS Password string is set for Grid user 08-14-2019 19:18:12 UTC : : Common OS Password string is set for Oracle user 08-14-2019 19:18:12 UTC : : Common OS Password string is set for Oracle Database 08-14-2019 19:18:12 UTC : : Setting CONFIGURE_GNS to false 08-14-2019 19:18:12 UTC : : GRID_RESPONSE_FILE env variable set to empty. 
configGrid.sh will use standard cluster responsefile 08-14-2019 19:18:12 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 08-14-2019 19:18:12 UTC : : IGNORE_CVU_CHECKS is set to true 08-14-2019 19:18:12 UTC : : Oracle SID is set to ORCLCDB 08-14-2019 19:18:12 UTC : : Oracle PDB name is set to ORCLPDB 08-14-2019 19:18:12 UTC : : Check passed for network card eth1 for public IP 172.16.1.150 08-14-2019 19:18:12 UTC : : Public Netmask : 255.255.255.0 08-14-2019 19:18:12 UTC : : Check passed for network card eth0 for private IP 192.168.17.150 08-14-2019 19:18:12 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File 08-14-2019 19:18:12 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5 08-14-2019 19:18:12 UTC : : Setting random password for grid user 08-14-2019 19:18:12 UTC : : Setting random password for oracle user 08-14-2019 19:18:12 UTC : : Calling setupSSH function 08-14-2019 19:18:12 UTC : : SSh will be setup among racnode1 nodes 08-14-2019 19:18:12 UTC : : Running SSH setup for grid user between nodes racnode1 08-14-2019 19:18:48 UTC : : Running SSH setup for oracle user between nodes racnode1 08-14-2019 19:18:54 UTC : : SSH check fine for the racnode1 08-14-2019 19:18:54 UTC : : SSH check fine for the oracle@racnode1 08-14-2019 19:18:54 UTC : : Preapring Device list 08-14-2019 19:18:54 UTC : : Changing Disk permission and ownership 08-14-2019 19:18:54 UTC : : Changing Disk permission and ownership 08-14-2019 19:18:54 UTC : : ASM Disk size : 0 08-14-2019 19:18:54 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2, 08-14-2019 19:18:54 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2 08-14-2019 19:18:54 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC 08-14-2019 19:18:54 UTC : : Nodes in the cluster racnode1 08-14-2019 19:18:54 UTC : : Setting Device permissions for RAC Install on racnode1 08-14-2019 19:18:54 UTC : : Preapring ASM Device list 08-14-2019 19:18:54 UTC : : Changing Disk permission and ownership 08-14-2019 19:18:54 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-14-2019 19:18:54 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-14-2019 19:18:54 UTC : : Populate Rac Env Vars on Remote Hosts 08-14-2019 19:18:54 UTC : : Changing Disk permission and ownership 08-14-2019 19:18:54 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-14-2019 19:18:54 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-14-2019 19:18:54 UTC : : Populate Rac Env Vars on Remote Hosts 08-14-2019 19:18:55 UTC : : Generating Reponsefile 08-14-2019 19:18:55 UTC : : Running cluvfy Checks 08-14-2019 19:18:55 UTC : : Performing Cluvfy Checks 08-14-2019 19:20:06 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: dba ...PASSED Verifying Group Existence: oinstall ...PASSED Verifying Group Membership: dba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Group Membership: oinstall(Primary) ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...PASSED Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...PASSED Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED Verifying Port Availability for component "Oracle Database Listener" ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying Multicast check ...PASSED Verifying ASM Integrity ... Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... 
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615)
Verifying ASM device sharedness check ...WARNING (PRVG-1615)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...WARNING (PRVG-1615)
Verifying I/O scheduler ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying VIP Subnet configuration check ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying DNS/NIS name service ...
Verifying Name Service Switch Configuration File Integrity ...PASSED
Verifying DNS/NIS name service ...FAILED (PRVG-1101)
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Oracle base: /export/app/grid ...
Verifying '/export/app/grid' ...PASSED
Verifying Oracle base: /export/app/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying Network interface bonding status of private interconnect network interfaces ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve
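For reference, the PRVF-5636/PRVG-10048/PRVG-1101 failures above come from CVU querying the name server listed in /etc/resolv.conf, which on a user-defined Docker network is the embedded DNS at 127.0.0.11; names that exist only in /etc/hosts do not satisfy that check. A quick sanity check inside the container (a sketch; nslookup assumes bind-utils is installed in the image):

cat /etc/resolv.conf        # expect "nameserver 127.0.0.11" on a user-defined Docker network
getent hosts racnode-scan   # resolves via NSS, so /etc/hosts entries count here
nslookup racnode-scan       # queries the name server directly, which is what CVU does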

CVU operation performed: stage -pre crsinst
Date: Aug 14, 2019 7:19:02 PM
CVU home: /export/app/12.2.0/grid/
User: grid
08-14-2019 19:20:06 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
08-14-2019 19:20:06 UTC : : Running Grid Installation
08-14-2019 19:20:24 UTC : : Running root.sh
08-14-2019 19:20:24 UTC : : Nodes in the cluster racnode1
08-14-2019 19:20:24 UTC : : Running root.sh on racnode1
08-14-2019 19:20:24 UTC : : Running post root.sh steps
08-14-2019 19:20:24 UTC : : Running post root.sh steps to setup Grid env
08-14-2019 19:20:30 UTC : : Checking Cluster Status
08-14-2019 19:20:30 UTC : : Nodes in the cluster
08-14-2019 19:20:30 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed
08-14-2019 19:20:30 UTC : : Generating DB Responsefile Running DB creation
08-14-2019 19:20:30 UTC : : Running DB creation
08-14-2019 19:20:42 UTC : : Checking DB status
08-14-2019 19:20:43 UTC : : ORCLCDB is not up and running on racnode1
08-14-2019 19:20:43 UTC : : Error has occurred in Grid Setup, Please verify!
08-15-2019 05:53:32 UTC : : Process id of the program :
08-15-2019 05:53:32 UTC : : #################################################
08-15-2019 05:53:32 UTC : : Starting Grid Installation
08-15-2019 05:53:32 UTC : : #################################################
08-15-2019 05:53:32 UTC : : Pre-Grid Setup steps are in process
08-15-2019 05:53:32 UTC : : Process id of the program :
08-15-2019 05:53:32 UTC : : Sleeping for 60 seconds
08-15-2019 05:54:32 UTC : : Systemctl state is running!
08-15-2019 05:54:32 UTC : : Setting correct permissions for /bin/ping
08-15-2019 05:54:32 UTC : : Public IP is set to 172.16.1.150
08-15-2019 05:54:32 UTC : : RAC Node PUBLIC Hostname is set to racnode1
08-15-2019 05:54:32 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.com racnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
08-15-2019 05:54:32 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required
08-15-2019 05:54:32 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
08-15-2019 05:54:32 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
08-15-2019 05:54:32 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required
08-15-2019 05:54:32 UTC : : Preapring Device list
08-15-2019 05:54:32 UTC : : Changing Disk permission and ownership /dev/asm_disk1
08-15-2019 05:54:32 UTC : : Changing Disk permission and ownership /dev/asm_disk2
08-15-2019 05:54:32 UTC : : #####################################################################
08-15-2019 05:54:32 UTC : : RAC setup will begin in 2 minutes
08-15-2019 05:54:32 UTC : : ####################################################################
08-15-2019 05:54:34 UTC : : ###################################################
08-15-2019 05:54:34 UTC : : Pre-Grid Setup steps completed
08-15-2019 05:54:34 UTC : : ###################################################
08-15-2019 05:54:34 UTC : : Checking if grid is already configured
08-15-2019 05:54:34 UTC : : Process id of the program :
08-15-2019 05:54:34 UTC : : Public IP is set to 172.16.1.150
08-15-2019 05:54:34 UTC : : RAC Node PUBLIC Hostname is set to racnode1
08-15-2019 05:54:34 UTC : : Domain is defined to example.com
08-15-2019 05:54:34 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
08-15-2019 05:54:34 UTC : : RAC VIP set to 172.16.1.160
08-15-2019 05:54:34 UTC : : RAC Node VIP hostname is set to racnode1-vip
08-15-2019 05:54:34 UTC : : SCAN_NAME name is racnode-scan
08-15-2019 05:54:34 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port.
08-15-2019 05:54:54 UTC : : 172.16.1.70
08-15-2019 05:54:54 UTC : : SCAN Name resolving to IP. Check Passed!
08-15-2019 05:54:54 UTC : : SCAN_IP name is 172.16.1.70
08-15-2019 05:54:54 UTC : : RAC Node PRIV IP is set to 192.168.17.150
08-15-2019 05:54:54 UTC : : RAC Node private hostname is set to racnode1-priv
08-15-2019 05:54:54 UTC : : CMAN_HOSTNAME name is racnode-cman1
08-15-2019 05:54:54 UTC : : CMAN_IP name is 172.16.1.15
08-15-2019 05:54:54 UTC : : Cluster Name is not defined
08-15-2019 05:54:54 UTC : : Cluster name is set to 'racnode-c'
08-15-2019 05:54:54 UTC : : Password file generated
08-15-2019 05:54:54 UTC : : Common OS Password string is set for Grid user
08-15-2019 05:54:54 UTC : : Common OS Password string is set for Oracle user
08-15-2019 05:54:54 UTC : : Common OS Password string is set for Oracle Database
08-15-2019 05:54:54 UTC : : Setting CONFIGURE_GNS to false
08-15-2019 05:54:54 UTC : : GRID_RESPONSE_FILE env variable set to empty. configGrid.sh will use standard cluster responsefile
08-15-2019 05:54:54 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
08-15-2019 05:54:54 UTC : : IGNORE_CVU_CHECKS is set to true
08-15-2019 05:54:54 UTC : : Oracle SID is set to ORCLCDB
08-15-2019 05:54:54 UTC : : Oracle PDB name is set to ORCLPDB
08-15-2019 05:54:54 UTC : : Check passed for network card eth1 for public IP 172.16.1.150
08-15-2019 05:54:54 UTC : : Public Netmask : 255.255.255.0
08-15-2019 05:54:54 UTC : : Check passed for network card eth0 for private IP 192.168.17.150
08-15-2019 05:54:54 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File
08-15-2019 05:54:54 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5
08-15-2019 05:54:54 UTC : : Setting random password for grid user
08-15-2019 05:54:54 UTC : : Setting random password for oracle user
08-15-2019 05:54:54 UTC : : Calling setupSSH function
08-15-2019 05:54:54 UTC : : SSh will be setup among racnode1 nodes
08-15-2019 05:54:54 UTC : : Running SSH setup for grid user between nodes racnode1
08-15-2019 05:55:30 UTC : : Running SSH setup for oracle user between nodes racnode1
08-15-2019 05:55:36 UTC : : SSH check fine for the racnode1
08-15-2019 05:55:36 UTC : : SSH check fine for the oracle@racnode1
08-15-2019 05:55:36 UTC : : Preapring Device list
08-15-2019 05:55:36 UTC : : Changing Disk permission and ownership
08-15-2019 05:55:36 UTC : : Changing Disk permission and ownership
08-15-2019 05:55:36 UTC : : ASM Disk size : 0
08-15-2019 05:55:36 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2,
08-15-2019 05:55:36 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2
08-15-2019 05:55:36 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC
08-15-2019 05:55:36 UTC : : Nodes in the cluster racnode1
08-15-2019 05:55:36 UTC : : Setting Device permissions for RAC Install on racnode1
08-15-2019 05:55:36 UTC : : Preapring ASM Device list
08-15-2019 05:55:36 UTC : : Changing Disk permission and ownership
08-15-2019 05:55:36 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1
08-15-2019 05:55:36 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1
08-15-2019 05:55:36 UTC : : Populate Rac Env Vars on Remote Hosts
08-15-2019 05:55:36 UTC : : Changing Disk permission and ownership
08-15-2019 05:55:36 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1
08-15-2019 05:55:37 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1
08-15-2019 05:55:37 UTC : : Populate Rac Env Vars on Remote Hosts
08-15-2019 05:55:37 UTC : : Generating Reponsefile
08-15-2019 05:55:37 UTC : : Running cluvfy Checks
08-15-2019 05:55:37 UTC : : Performing Cluvfy Checks
08-15-2019 05:56:39 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.


CVU operation performed: stage -pre crsinst
Date: Aug 15, 2019 5:55:38 AM
CVU home: /export/app/12.2.0/grid/
User: grid
08-15-2019 05:56:39 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
08-15-2019 05:56:39 UTC : : Running Grid Installation
08-15-2019 05:56:50 UTC : : Running root.sh
08-15-2019 05:56:50 UTC : : Nodes in the cluster racnode1
08-15-2019 05:56:50 UTC : : Running root.sh on racnode1
08-15-2019 05:56:50 UTC : : Running post root.sh steps
08-15-2019 05:56:50 UTC : : Running post root.sh steps to setup Grid env
08-15-2019 05:56:56 UTC : : Checking Cluster Status
08-15-2019 05:56:56 UTC : : Nodes in the cluster
08-15-2019 05:56:56 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed
08-15-2019 05:56:56 UTC : : Generating DB Responsefile Running DB creation
08-15-2019 05:56:56 UTC : : Running DB creation
08-15-2019 05:56:59 UTC : : Checking DB status
08-15-2019 05:56:59 UTC : : ORCLCDB is not up and running on racnode1
08-15-2019 05:56:59 UTC : : Error has occurred in Grid Setup, Please verify!
08-15-2019 06:50:23 UTC : : Process id of the program :
08-15-2019 06:50:23 UTC : : #################################################
08-15-2019 06:50:23 UTC : : Starting Grid Installation
08-15-2019 06:50:23 UTC : : #################################################
08-15-2019 06:50:23 UTC : : Pre-Grid Setup steps are in process
08-15-2019 06:50:23 UTC : : Process id of the program :
08-15-2019 06:50:23 UTC : : Sleeping for 60 seconds
08-15-2019 06:51:23 UTC : : Systemctl state is running!
08-15-2019 06:51:23 UTC : : Setting correct permissions for /bin/ping
08-15-2019 06:51:23 UTC : : Public IP is set to 172.16.1.150
08-15-2019 06:51:23 UTC : : RAC Node PUBLIC Hostname is set to racnode1
08-15-2019 06:51:23 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.com racnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
08-15-2019 06:51:23 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required
08-15-2019 06:51:23 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
08-15-2019 06:51:23 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
08-15-2019 06:51:23 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required
08-15-2019 06:51:23 UTC : : Preapring Device list
08-15-2019 06:51:23 UTC : : Changing Disk permission and ownership /dev/asm_disk1
08-15-2019 06:51:23 UTC : : Changing Disk permission and ownership /dev/asm_disk2
08-15-2019 06:51:23 UTC : : #####################################################################
08-15-2019 06:51:23 UTC : : RAC setup will begin in 2 minutes
08-15-2019 06:51:23 UTC : : ####################################################################
08-15-2019 06:51:25 UTC : : ###################################################
08-15-2019 06:51:25 UTC : : Pre-Grid Setup steps completed
08-15-2019 06:51:25 UTC : : ###################################################
08-15-2019 06:51:25 UTC : : Checking if grid is already configured
08-15-2019 06:51:25 UTC : : Process id of the program :
08-15-2019 06:51:25 UTC : : Public IP is set to 172.16.1.150
08-15-2019 06:51:25 UTC : : RAC Node PUBLIC Hostname is set to racnode1
08-15-2019 06:51:25 UTC : : Domain is defined to example.com
08-15-2019 06:51:25 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
08-15-2019 06:51:25 UTC : : RAC VIP set to 172.16.1.160
08-15-2019 06:51:25 UTC : : RAC Node VIP hostname is set to racnode1-vip
08-15-2019 06:51:25 UTC : : SCAN_NAME name is racnode-scan
08-15-2019 06:51:25 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port.
08-15-2019 06:51:45 UTC : : 172.16.1.70
08-15-2019 06:51:45 UTC : : SCAN Name resolving to IP. Check Passed!
08-15-2019 06:51:45 UTC : : SCAN_IP name is 172.16.1.70
08-15-2019 06:51:45 UTC : : RAC Node PRIV IP is set to 192.168.17.150
08-15-2019 06:51:45 UTC : : RAC Node private hostname is set to racnode1-priv
08-15-2019 06:51:45 UTC : : CMAN_HOSTNAME name is racnode-cman1
08-15-2019 06:51:45 UTC : : CMAN_IP name is 172.16.1.15
08-15-2019 06:51:45 UTC : : Cluster Name is not defined
08-15-2019 06:51:45 UTC : : Cluster name is set to 'racnode-c'
08-15-2019 06:51:45 UTC : : Password file generated
08-15-2019 06:51:45 UTC : : Common OS Password string is set for Grid user
08-15-2019 06:51:45 UTC : : Common OS Password string is set for Oracle user
08-15-2019 06:51:45 UTC : : Common OS Password string is set for Oracle Database
08-15-2019 06:51:45 UTC : : Setting CONFIGURE_GNS to false
08-15-2019 06:51:45 UTC : : GRID_RESPONSE_FILE env variable set to empty. configGrid.sh will use standard cluster responsefile
08-15-2019 06:51:45 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
08-15-2019 06:51:45 UTC : : IGNORE_CVU_CHECKS is set to true
08-15-2019 06:51:45 UTC : : Oracle SID is set to ORCLCDB
08-15-2019 06:51:45 UTC : : Oracle PDB name is set to ORCLPDB
08-15-2019 06:51:45 UTC : : Check passed for network card eth1 for public IP 172.16.1.150
08-15-2019 06:51:45 UTC : : Public Netmask : 255.255.255.0
08-15-2019 06:51:45 UTC : : Check passed for network card eth0 for private IP 192.168.17.150
08-15-2019 06:51:45 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File
08-15-2019 06:51:45 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5
08-15-2019 06:51:45 UTC : : Setting random password for grid user
08-15-2019 06:51:45 UTC : : Setting random password for oracle user
08-15-2019 06:51:45 UTC : : Calling setupSSH function
08-15-2019 06:51:45 UTC : : SSh will be setup among racnode1 nodes
08-15-2019 06:51:45 UTC : : Running SSH setup for grid user between nodes racnode1
08-15-2019 06:52:21 UTC : : Running SSH setup for oracle user between nodes racnode1
08-15-2019 06:52:27 UTC : : SSH check fine for the racnode1
08-15-2019 06:52:27 UTC : : SSH check fine for the oracle@racnode1
08-15-2019 06:52:27 UTC : : Preapring Device list
08-15-2019 06:52:27 UTC : : Changing Disk permission and ownership
08-15-2019 06:52:27 UTC : : Changing Disk permission and ownership
08-15-2019 06:52:27 UTC : : ASM Disk size : 0
08-15-2019 06:52:27 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2,
08-15-2019 06:52:27 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2
08-15-2019 06:52:27 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC
08-15-2019 06:52:27 UTC : : Nodes in the cluster racnode1
08-15-2019 06:52:27 UTC : : Setting Device permissions for RAC Install on racnode1
08-15-2019 06:52:27 UTC : : Preapring ASM Device list
08-15-2019 06:52:27 UTC : : Changing Disk permission and ownership
08-15-2019 06:52:27 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1
08-15-2019 06:52:27 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1
08-15-2019 06:52:27 UTC : : Populate Rac Env Vars on Remote Hosts
08-15-2019 06:52:27 UTC : : Changing Disk permission and ownership
08-15-2019 06:52:27 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1
08-15-2019 06:52:27 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1
08-15-2019 06:52:28 UTC : : Populate Rac Env Vars on Remote Hosts
08-15-2019 06:52:28 UTC : : Generating Reponsefile
08-15-2019 06:52:28 UTC : : Running cluvfy Checks
08-15-2019 06:52:28 UTC : : Performing Cluvfy Checks
08-15-2019 06:53:30 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.


CVU operation performed: stage -pre crsinst
Date: Aug 15, 2019 6:52:29 AM
CVU home: /export/app/12.2.0/grid/
User: grid
08-15-2019 06:53:30 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
08-15-2019 06:53:30 UTC : : Running Grid Installation
08-15-2019 06:53:44 UTC : : Running root.sh
08-15-2019 06:53:44 UTC : : Nodes in the cluster racnode1
08-15-2019 06:53:44 UTC : : Running root.sh on racnode1
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
08-15-2019 07:00:10 UTC : : Running post root.sh steps
08-15-2019 07:00:10 UTC : : Running post root.sh steps to setup Grid env
08-15-2019 07:00:34 UTC : : Checking Cluster Status
08-15-2019 07:00:34 UTC : : Nodes in the cluster
08-15-2019 07:00:34 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed
08-15-2019 07:00:34 UTC : : Generating DB Responsefile Running DB creation
08-15-2019 07:00:34 UTC : : Running DB creation
08-15-2019 07:12:12 UTC : : Checking DB status
08-15-2019 07:12:14 UTC : : #################################################################
08-15-2019 07:12:14 UTC : : Oracle Database ORCLCDB is up and running on racnode1
08-15-2019 07:12:14 UTC : : #################################################################
08-15-2019 07:12:14 UTC : : Running User Script
08-15-2019 07:12:14 UTC : : Setting Remote Listener
08-15-2019 07:12:34 UTC : : 172.16.1.15
08-15-2019 07:12:34 UTC : : Executing script to set the remote listener
08-15-2019 07:12:36 UTC : : ####################################
08-15-2019 07:12:36 UTC : : ORACLE RAC DATABASE IS READY TO USE!
08-15-2019 07:12:36 UTC : : ####################################
08-15-2019 07:19:53 UTC : : Process id of the program :
08-15-2019 07:19:53 UTC : : #################################################
08-15-2019 07:19:53 UTC : : Starting Grid Installation
08-15-2019 07:19:53 UTC : : #################################################
08-15-2019 07:19:53 UTC : : Pre-Grid Setup steps are in process
08-15-2019 07:19:53 UTC : : Process id of the program :
08-15-2019 07:19:53 UTC : : Sleeping for 60 seconds
08-15-2019 07:20:53 UTC : : Systemctl state is running!
08-15-2019 07:20:53 UTC : : Setting correct permissions for /bin/ping
08-15-2019 07:20:53 UTC : : Public IP is set to 172.16.1.150
08-15-2019 07:20:53 UTC : : RAC Node PUBLIC Hostname is set to racnode1
08-15-2019 07:20:53 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.com racnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
08-15-2019 07:20:53 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required
08-15-2019 07:20:53 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
08-15-2019 07:20:53 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
08-15-2019 07:20:53 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required
08-15-2019 07:20:53 UTC : : Preapring Device list
08-15-2019 07:20:53 UTC : : Changing Disk permission and ownership /dev/asm_disk1
08-15-2019 07:20:53 UTC : : Changing Disk permission and ownership /dev/asm_disk2
08-15-2019 07:20:53 UTC : : #####################################################################
08-15-2019 07:20:53 UTC : : RAC setup will begin in 2 minutes
08-15-2019 07:20:53 UTC : : ####################################################################
08-15-2019 07:20:55 UTC : : ###################################################
08-15-2019 07:20:55 UTC : : Pre-Grid Setup steps completed
08-15-2019 07:20:55 UTC : : ###################################################
08-15-2019 07:20:55 UTC : : Checking if grid is already configured
08-15-2019 07:20:55 UTC : : Grid is installed on racnode1. runOracle.sh will start the Grid service
08-15-2019 07:20:55 UTC : : Setting up Grid Env for Grid Start
08-15-2019 07:20:55 UTC : : ##########################################################################################
08-15-2019 07:20:55 UTC : : Grid is already installed on this container! Grid will be started by default ohasd scripts
08-15-2019 07:20:55 UTC : : ############################################################################################
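Since the stack was only restarted here, its health can be confirmed from the Docker host with something like the following sketch, reusing the Grid and database homes shown earlier in this log:

docker exec racnode1 su - grid -c "/export/app/12.2.0/grid/bin/crsctl check cluster"   # CSSD/CRSD/EVMD health
docker exec racnode1 su - grid -c "/export/app/12.2.0/grid/bin/crsctl stat res -t"     # full resource table
docker exec racnode1 su - oracle -c "/export/app/oracle/product/12.2.0/dbhome_1/bin/srvctl status database -d ORCLCDB"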

psaini79 commented 5 years ago

OK. Grid and the Oracle Database are up and running. It was a password issue.

All yours. Please update the SR with the findings: the issue was not related to DNS; the installation was failing because of a wrong password.
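For anyone hitting the same thing: the containers read the common OS/database password from an encrypted file (COMMON_OS_PWD_FILE=common_os_pwdfile.enc, decrypted with PWD_KEY=pwd.key, as seen in the environment dump further down). A rough sketch of regenerating it on the Docker host, following the pattern the RAC README documents; the path and password below are illustrative:

mkdir -p /opt/.secrets
openssl rand -out /opt/.secrets/pwd.key -hex 64         # new key file
echo -n "MyNewPwd_1" > /opt/.secrets/common_os_pwdfile  # placeholder password
openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile \
    -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key
rm -f /opt/.secrets/common_os_pwdfile                   # keep only the encrypted copy

The secrets directory is then made available to the container (for example via a volume) so the setup scripts can decrypt the password at startup.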

psaini79 commented 5 years ago

Let me know if you have a use case and would like more details. You can ask the SR owner to connect with me; they can provide my email ID.

psaini79 commented 5 years ago

Please close this thread if the issue is resolved.

babloo2642 commented 5 years ago

Yes, Grid and the Oracle Database are up and running. Thank you so much for your help! I'll update the SR and close it, and I'll let you know if I need your help again. Can I close this issue after a few days?

psaini79 commented 5 years ago

You can reopen it anytime or create a new thread. Please close this one.

babloo2642 commented 5 years ago

OK, got it. Thank you.

babloo2642 commented 5 years ago

@psaini79

I'm unable to log in with sqlplus. Please find the details below.

[oracle@racnode1 ~]$ sqlplus system@\"racnode1:1521/ORCLCDB\"

SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 15 19:21:54 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter password:
ERROR:
ORA-28000: the account is locked

Enter user-name: system
Enter password:
ERROR:
ORA-12162: TNS:net service name is incorrectly specified
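ORA-28000 means the SYSTEM account itself got locked, typically after repeated wrong-password attempts; the ORA-12162 that follows only appears because the retry prompt had no connect string and no ORACLE_SID set in that shell. One way to unlock the account, as the oracle OS user on racnode1 (a sketch: the home path comes from the logs above, the instance name ORCLCDB1 is an assumption to verify, and the password is a placeholder):

export ORACLE_HOME=/export/app/oracle/product/12.2.0/dbhome_1
export ORACLE_SID=ORCLCDB1      # assumed instance name; check with: ps -ef | grep pmon
$ORACLE_HOME/bin/sqlplus / as sysdba <<'EOF'
ALTER USER system ACCOUNT UNLOCK;
ALTER USER system IDENTIFIED BY "MyNewPwd_1";  -- placeholder password
EOF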

babloo2642 commented 5 years ago

@psaini79

Cluster check failed. Please find the details below.

[root@docker ~]# docker logs -f racnode2
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=racnode2
TERM=xterm
EXISTING_CLS_NODES=racnode1
NODE_VIP=172.16.1.161
VIP_HOSTNAME=racnode2-vip
PRIV_IP=192.168.17.151
PRIV_HOSTNAME=racnode2-priv
PUBLIC_IP=172.16.1.151
PUBLIC_HOSTNAME=racnode2
DOMAIN=example.com
SCAN_NAME=racnode-scan
SCAN_IP=172.16.1.70
ASM_DISCOVERY_DIR=/dev
ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2
ORACLE_SID=ORCLCDB1
OP_TYPE=ADDNODE
COMMON_OS_PWD_FILE=common_os_pwdfile.enc
PWD_KEY=pwd.key
SETUP_LINUX_FILE=setupLinuxEnv.sh
INSTALL_DIR=/opt/scripts
GRID_BASE=/export/app/grid
GRID_HOME=/export/app/12.2.0/grid
INSTALL_FILE_1=linuxx64_12201_grid_home.zip
GRID_INSTALL_RSP=grid.rsp
GRID_SETUP_FILE=setupGrid.sh
FIXUP_PREQ_FILE=fixupPreq.sh
INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh
INSTALL_GRID_PATCH=applyGridPatch.sh
INVENTORY=/export/app/oraInventory
CONFIGGRID=configGrid.sh
ADDNODE=AddNode.sh
DELNODE=DelNode.sh
ADDNODE_RSP=grid_addnode.rsp
SETUPSSH=setupSSH.expect
GRID_PATCH=p27383741_122010_Linux-x86-64.zip
PATCH_NUMBER=27383741
SETUPDOCKERORACLEINIT=setupdockeroracleinit.sh
DOCKERORACLEINIT=dockeroracleinit
GRID_USER_HOME=/home/grid
SETUPGRIDENV=setupGridEnv.sh
DB_BASE=/export/app/oracle
DB_HOME=/export/app/oracle/product/12.2.0/dbhome_1
INSTALL_FILE_2=linuxx64_12201_database.zip
DB_INSTALL_RSP=db_inst.rsp
DBCA_RSP=dbca.rsp
DB_SETUP_FILE=setupDB.sh
PWD_FILE=setPassword.sh
RUN_FILE=runOracle.sh
STOP_FILE=stopOracle.sh
ENABLE_RAC_FILE=enableRAC.sh
CHECK_DB_FILE=checkDBStatus.sh
USER_SCRIPTS_FILE=runUserScripts.sh
REMOTE_LISTENER_FILE=remoteListener.sh
INSTALL_DB_BINARIES_FILE=installDBBinaries.sh
RESET_OS_PASSWORD=resetOSPassword.sh
MULTI_NODE_INSTALL=MultiNodeInstall.py
FUNCTIONS=functions.sh
COMMON_SCRIPTS=/common_scripts
CHECK_SPACE_FILE=checkSpace.sh
EXPECT=/usr/bin/expect
BIN=/usr/sbin
container=true
INSTALL_SCRIPTS=/opt/scripts/install
SCRIPT_DIR=/opt/scripts/startup
GRID_PATH=/export/app/12.2.0/grid/bin:/export/app/12.2.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DB_PATH=/export/app/oracle/product/12.2.0/dbhome_1/bin:/export/app/oracle/product/12.2.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
GRID_LD_LIBRARY_PATH=/export/app/12.2.0/grid/lib:/usr/lib:/lib
DB_LD_LIBRARY_PATH=/export/app/oracle/product/12.2.0/dbhome_1/lib:/usr/lib:/lib
HOME=/home/grid
Failed to parse kernel command line, ignoring: No such file or directory
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization other.
Detected architecture x86-64.

Welcome to Oracle Linux Server 7.6!

Set hostname to <racnode2>.
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
/usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1.
Binding to IPv6 address not available since kernel does not support IPv6.
Binding to IPv6 address not available since kernel does not support IPv6.
Cannot add dependency job for unit display-manager.service, ignoring: Unit not found.
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Created slice Root Slice.
[ OK ] Listening on Journal Socket.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Reached target RPC Port Mapper.
[ OK ] Created slice System Slice.
Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory
Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory
[ OK ] Created slice system-getty.slice.
Starting Read and set NIS domainname from /etc/sysconfig/network...
Starting Rebuild Hardware Database...
Starting Journal Service...
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Created slice User and Session Slice.
[ OK ] Reached target Slices.
[ OK ] Reached target Swap.
Starting Configure read-only root support...
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Journal Service.
Starting Flush Journal to Persistent Storage...
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started Configure read-only root support.
[ OK ] Reached target Local File Systems.
Starting Mark the need to relabel after reboot...
Starting Rebuild Journal Catalog...
Starting Preprocess NFS configuration...
Starting Create Volatile Files and Directories...
Starting Load/Save Random Seed...
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Rebuild Journal Catalog.
[ OK ] Started Preprocess NFS configuration.
[ OK ] Started Create Volatile Files and Directories.
Starting Update UTMP about System Boot/Shutdown...
Mounting RPC Pipe File System...
[ OK ] Started Load/Save Random Seed.
[FAILED] Failed to mount RPC Pipe File System.
See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details.
[DEPEND] Dependency failed for rpc_pipefs.target.
[DEPEND] Dependency failed for RPC security service for NFS client and server.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Started Rebuild Hardware Database.
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Listening on RPCbind Server Activation Socket.
Starting RPC bind service...
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting OpenSSH Server Key Generation...
Starting Resets System Activity Logs...
Starting GSSAPI Proxy Daemon...
Starting Login Service...
[ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon.
Starting LSB: Bring up/down networking...
[ OK ] Started D-Bus System Message Bus.
[ OK ] Started RPC bind service.
[ OK ] Started GSSAPI Proxy Daemon.
Starting Cleanup of Temporary Directories...
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Resets System Activity Logs.
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started Login Service.
[ OK ] Started Permit User Sessions.
[ OK ] Started Command Scheduler.
[ OK ] Started OpenSSH Server Key Generation.
[ OK ] Started LSB: Bring up/down networking.
[ OK ] Reached target Network.
[ OK ] Reached target Network is Online.
Starting Notify NFS peers of a restart...
Starting OpenSSH server daemon...
Starting /etc/rc.d/rc.local Compatibility...
[ OK ] Started Notify NFS peers of a restart.
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Console Getty.
[ OK ] Reached target Login Prompts.
[ OK ] Started OpenSSH server daemon.
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Starting Update UTMP about System Runlevel Changes...
08-19-2019 22:57:49 UTC : : #################################################
08-19-2019 22:57:49 UTC : : Starting Grid Installation
08-19-2019 22:57:49 UTC : : #################################################
08-19-2019 22:57:49 UTC : : Pre-Grid Setup steps are in process
08-19-2019 22:57:49 UTC : : Process id of the program :
08-19-2019 22:57:49 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
08-19-2019 22:57:49 UTC : : Resetting Failed Services
08-19-2019 22:57:49 UTC : : Sleeping for 60 seconds

Oracle Linux Server 7.6 Kernel 4.1.12-124.25.1.el7uek.x86_64 on an x86_64

racnode2 login:
08-19-2019 22:58:49 UTC : : Systemctl state is running!
08-19-2019 22:58:49 UTC : : Setting correct permissions for /bin/ping
08-19-2019 22:58:49 UTC : : Public IP is set to 172.16.1.151
08-19-2019 22:58:49 UTC : : RAC Node PUBLIC Hostname is set to racnode2
08-19-2019 22:58:49 UTC : : racnode2 already exists : 172.16.1.151 racnode2.example.com racnode2 192.168.17.151 racnode2-priv.example.com racnode2-priv 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required
08-19-2019 22:58:49 UTC : : racnode2-priv already exists : 192.168.17.151 racnode2-priv.example.com racnode2-priv, no update required
08-19-2019 22:58:49 UTC : : racnode2-vip already exists : 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required
08-19-2019 22:58:49 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
08-19-2019 22:58:49 UTC : : Preapring Device list
08-19-2019 22:58:49 UTC : : Changing Disk permission and ownership /dev/asm_disk1
08-19-2019 22:58:49 UTC : : Changing Disk permission and ownership /dev/asm_disk2
08-19-2019 22:58:49 UTC : : #####################################################################
08-19-2019 22:58:49 UTC : : RAC setup will begin in 2 minutes
08-19-2019 22:58:49 UTC : : ####################################################################
08-19-2019 22:58:51 UTC : : ###################################################
08-19-2019 22:58:51 UTC : : Pre-Grid Setup steps completed
08-19-2019 22:58:51 UTC : : ###################################################
08-19-2019 22:58:51 UTC : : Checking if grid is already configured
08-19-2019 22:58:51 UTC : : Public IP is set to 172.16.1.151
08-19-2019 22:58:51 UTC : : RAC Node PUBLIC Hostname is set to racnode2
08-19-2019 22:58:51 UTC : : Domain is defined to example.com
08-19-2019 22:58:51 UTC : : Setting Existing Cluster Node for node addition operation. This will be retrieved from racnode1
08-19-2019 22:58:51 UTC : : Existing Node Name of the cluster is set to racnode1
08-19-2019 22:59:01 UTC : : 172.16.1.150
08-19-2019 22:59:01 UTC : : Existing Cluster node resolved to IP. Check passed
08-19-2019 22:59:01 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
08-19-2019 22:59:01 UTC : : RAC VIP set to 172.16.1.161
08-19-2019 22:59:01 UTC : : RAC Node VIP hostname is set to racnode2-vip
08-19-2019 22:59:01 UTC : : SCAN_NAME name is racnode-scan
08-19-2019 22:59:21 UTC : : 172.16.1.70
08-19-2019 22:59:21 UTC : : SCAN Name resolving to IP. Check Passed!
08-19-2019 22:59:21 UTC : : SCAN_IP name is 172.16.1.70
08-19-2019 22:59:21 UTC : : RAC Node PRIV IP is set to 192.168.17.151
08-19-2019 22:59:21 UTC : : RAC Node private hostname is set to racnode2-priv
08-19-2019 22:59:21 UTC : : CMAN_NAME set to the empty string
08-19-2019 22:59:21 UTC : : CMAN_IP set to the empty string
08-19-2019 22:59:21 UTC : : Password file generated
08-19-2019 22:59:21 UTC : : Common OS Password string is set for Grid user
08-19-2019 22:59:21 UTC : : Common OS Password string is set for Oracle user
08-19-2019 22:59:21 UTC : : GRID_RESPONSE_FILE env variable set to empty. AddNode.sh will use standard cluster responsefile
08-19-2019 22:59:21 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
08-19-2019 22:59:21 UTC : : ORACLE_SID is set to ORCLCDB1
08-19-2019 22:59:21 UTC : : Setting random password for root/grid/oracle user
08-19-2019 22:59:21 UTC : : Setting random password for grid user
08-19-2019 22:59:21 UTC : : Setting random password for oracle user
08-19-2019 22:59:21 UTC : : Setting random password for root user
08-19-2019 22:59:21 UTC : : Cluster Nodes are racnode1 racnode2
08-19-2019 22:59:21 UTC : : Running SSH setup for grid user between nodes racnode1 racnode2
08-19-2019 22:59:33 UTC : : Running SSH setup for oracle user between nodes racnode1 racnode2
08-19-2019 22:59:44 UTC : : SSH check fine for the racnode1
08-19-2019 22:59:44 UTC : : SSH check fine for the racnode2
08-19-2019 22:59:45 UTC : : SSH check fine for the racnode2
08-19-2019 22:59:45 UTC : : SSH check fine for the oracle@racnode1
08-19-2019 22:59:45 UTC : : SSH check fine for the oracle@racnode2
08-19-2019 22:59:45 UTC : : SSH check fine for the oracle@racnode2
08-19-2019 22:59:45 UTC : : Setting Device permission to grid and asmadmin on all the cluster nodes
08-19-2019 22:59:45 UTC : : Nodes in the cluster racnode2
08-19-2019 22:59:45 UTC : : Setting Device permissions for RAC Install on racnode2
08-19-2019 22:59:45 UTC : : Preapring ASM Device list
08-19-2019 22:59:45 UTC : : Changing Disk permission and ownership
08-19-2019 22:59:45 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
08-19-2019 22:59:45 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
08-19-2019 22:59:45 UTC : : Populate Rac Env Vars on Remote Hosts
08-19-2019 22:59:45 UTC : : Changing Disk permission and ownership
08-19-2019 22:59:45 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
08-19-2019 22:59:46 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
08-19-2019 22:59:46 UTC : : Populate Rac Env Vars on Remote Hosts
08-19-2019 22:59:46 UTC : : Checking Cluster Status on racnode1
08-19-2019 22:59:46 UTC : : Checking Cluster
08-19-2019 22:59:46 UTC : : Cluster Check on remote node passed
08-19-2019 22:59:46 UTC : : Cluster Check went fine
08-19-2019 22:59:47 UTC : : MGMTDB Check went fine
08-19-2019 22:59:47 UTC : : CRSD Check went fine
08-19-2019 22:59:47 UTC : : CSSD Check went fine
08-19-2019 22:59:48 UTC : : EVMD Check went fine
08-19-2019 22:59:48 UTC : : Generating Responsefile for node addition
08-19-2019 22:59:48 UTC : : Running Cluster verification utility for new node racnode2 on racnode1
08-19-2019 22:59:48 UTC : : Nodes in the cluster racnode2
08-19-2019 22:59:48 UTC : : ssh to the node racnode1 and executing cvu checks on racnode2
08-19-2019 23:01:20 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/export/app/12.2.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/export/app/12.2.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED
Verifying User Existence: oracle ...
Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Addition ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying '/export/app/12.2.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING (PRVG-1615) Verifying ASM device sharedness check ...WARNING (PRVG-1615) Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...WARNING (PRVG-1615) Verifying Database home availability ...PASSED Verifying OCR Integrity ...PASSED Verifying Time zone consistency ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED Verifying User Not In Group "root": grid ...PASSED Verifying resolv.conf Integrity ... Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying DNS/NIS name service ...PASSED Verifying User Equivalence ...PASSED Verifying /boot mount ...PASSED Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode2: Check for integrity of file "/etc/resolv.conf" failed

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

CVU operation performed: stage -pre nodeadd
Date: Aug 19, 2019 10:59:50 PM
CVU home: /export/app/12.2.0/grid/
User: grid
08-19-2019 23:01:20 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
08-19-2019 23:01:20 UTC : : Running Node Addition and cluvfy test for node racnode2
08-19-2019 23:01:20 UTC : : Copying /tmp/grid_addnode.rsp on remote node racnode1
08-19-2019 23:01:20 UTC : : Running GridSetup.sh on racnode1 to add the node to existing cluster
08-19-2019 23:01:28 UTC : : Node Addition performed. removing Responsefile
08-19-2019 23:01:28 UTC : : Running root.sh on node racnode2
08-19-2019 23:01:28 UTC : : Nodes in the cluster racnode2
08-19-2019 23:01:28 UTC : : Checking Cluster
08-19-2019 23:01:28 UTC : : Cluster Check failed
08-19-2019 23:01:28 UTC : : Error has occurred in Grid Setup, Please verify!
08-19-2019 23:09:15 UTC : : Process id of the program :
08-19-2019 23:09:15 UTC : : #################################################
08-19-2019 23:09:15 UTC : : Starting Grid Installation
08-19-2019 23:09:15 UTC : : #################################################
08-19-2019 23:09:16 UTC : : Pre-Grid Setup steps are in process
08-19-2019 23:09:16 UTC : : Process id of the program :
08-19-2019 23:09:16 UTC : : Sleeping for 60 seconds
08-19-2019 23:10:16 UTC : : Systemctl state is running!
08-19-2019 23:10:16 UTC : : Setting correct permissions for /bin/ping
08-19-2019 23:10:16 UTC : : Public IP is set to 172.16.1.151
08-19-2019 23:10:16 UTC : : RAC Node PUBLIC Hostname is set to racnode2
08-19-2019 23:10:16 UTC : : racnode2 already exists : 172.16.1.151 racnode2.example.com racnode2 192.168.17.151 racnode2-priv.example.com racnode2-priv 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required
08-19-2019 23:10:16 UTC : : racnode2-priv already exists : 192.168.17.151 racnode2-priv.example.com racnode2-priv, no update required
08-19-2019 23:10:16 UTC : : racnode2-vip already exists : 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required
08-19-2019 23:10:16 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
08-19-2019 23:10:16 UTC : : Preapring Device list
08-19-2019 23:10:16 UTC : : Changing Disk permission and ownership /dev/asm_disk1
08-19-2019 23:10:16 UTC : : Changing Disk permission and ownership /dev/asm_disk2
08-19-2019 23:10:16 UTC : : #####################################################################
08-19-2019 23:10:16 UTC : : RAC setup will begin in 2 minutes
08-19-2019 23:10:16 UTC : : ####################################################################
08-19-2019 23:10:18 UTC : : ###################################################
08-19-2019 23:10:18 UTC : : Pre-Grid Setup steps completed
08-19-2019 23:10:18 UTC : : ###################################################
08-19-2019 23:10:18 UTC : : Checking if grid is already configured
08-19-2019 23:10:18 UTC : : Public IP is set to 172.16.1.151
08-19-2019 23:10:18 UTC : : RAC Node PUBLIC Hostname is set to racnode2
08-19-2019 23:10:18 UTC : : Domain is defined to example.com
08-19-2019 23:10:18 UTC : : Setting Existing Cluster Node for node addition operation. This will be retrieved from racnode1
08-19-2019 23:10:18 UTC : : Existing Node Name of the cluster is set to racnode1
08-19-2019 23:10:28 UTC : : 172.16.1.150
08-19-2019 23:10:28 UTC : : Existing Cluster node resolved to IP. Check passed
08-19-2019 23:10:28 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
08-19-2019 23:10:28 UTC : : RAC VIP set to 172.16.1.161
08-19-2019 23:10:28 UTC : : RAC Node VIP hostname is set to racnode2-vip
08-19-2019 23:10:28 UTC : : SCAN_NAME name is racnode-scan
08-19-2019 23:10:48 UTC : : 172.16.1.70
08-19-2019 23:10:48 UTC : : SCAN Name resolving to IP. Check Passed!
08-19-2019 23:10:48 UTC : : SCAN_IP name is 172.16.1.70
08-19-2019 23:10:48 UTC : : RAC Node PRIV IP is set to 192.168.17.151
08-19-2019 23:10:48 UTC : : RAC Node private hostname is set to racnode2-priv
08-19-2019 23:10:48 UTC : : CMAN_NAME set to the empty string
08-19-2019 23:10:48 UTC : : CMAN_IP set to the empty string
08-19-2019 23:10:48 UTC : : Password file generated
08-19-2019 23:10:48 UTC : : Common OS Password string is set for Grid user
08-19-2019 23:10:48 UTC : : Common OS Password string is set for Oracle user
08-19-2019 23:10:48 UTC : : GRID_RESPONSE_FILE env variable set to empty.
AddNode.sh will use standard cluster responsefile
08-19-2019 23:10:48 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
08-19-2019 23:10:48 UTC : : ORACLE_SID is set to ORCLCDB1
08-19-2019 23:10:48 UTC : : Setting random password for root/grid/oracle user
08-19-2019 23:10:48 UTC : : Setting random password for grid user
08-19-2019 23:10:48 UTC : : Setting random password for oracle user
08-19-2019 23:10:48 UTC : : Setting random password for root user
08-19-2019 23:10:48 UTC : : Cluster Nodes are racnode1 racnode2
08-19-2019 23:10:48 UTC : : Running SSH setup for grid user between nodes racnode1 racnode2
08-19-2019 23:10:59 UTC : : Running SSH setup for oracle user between nodes racnode1 racnode2
08-19-2019 23:11:11 UTC : : SSH check fine for the racnode1
08-19-2019 23:11:11 UTC : : SSH check fine for the racnode2
08-19-2019 23:11:11 UTC : : SSH check fine for the racnode2
08-19-2019 23:11:12 UTC : : SSH check fine for the oracle@racnode1
08-19-2019 23:11:12 UTC : : SSH check fine for the oracle@racnode2
08-19-2019 23:11:12 UTC : : SSH check fine for the oracle@racnode2
08-19-2019 23:11:12 UTC : : Setting Device permission to grid and asmadmin on all the cluster nodes
08-19-2019 23:11:12 UTC : : Nodes in the cluster racnode2
08-19-2019 23:11:12 UTC : : Setting Device permissions for RAC Install on racnode2
08-19-2019 23:11:12 UTC : : Preapring ASM Device list
08-19-2019 23:11:12 UTC : : Changing Disk permission and ownership
08-19-2019 23:11:12 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
08-19-2019 23:11:12 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
08-19-2019 23:11:12 UTC : : Populate Rac Env Vars on Remote Hosts
08-19-2019 23:11:12 UTC : : Changing Disk permission and ownership
08-19-2019 23:11:12 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
08-19-2019 23:11:12 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
08-19-2019 23:11:13 UTC : : Populate Rac Env Vars on Remote Hosts
08-19-2019 23:11:13 UTC : : Checking Cluster Status on racnode1
08-19-2019 23:11:13 UTC : : Checking Cluster
08-19-2019 23:11:13 UTC : : Cluster Check on remote node passed
08-19-2019 23:11:13 UTC : : Cluster Check went fine
08-19-2019 23:11:14 UTC : : MGMTDB Check went fine
08-19-2019 23:11:14 UTC : : CRSD Check went fine
08-19-2019 23:11:14 UTC : : CSSD Check went fine
08-19-2019 23:11:14 UTC : : EVMD Check went fine
08-19-2019 23:11:14 UTC : : Generating Responsefile for node addition
08-19-2019 23:11:15 UTC : : Running Cluster verification utility for new node racnode2 on racnode1
08-19-2019 23:11:15 UTC : : Moving any exisiting cluvfy /tmp/cluvfy_check.txt to /tmp/cluvfycheck.txt
08-19-2019 23:11:15 UTC : : Nodes in the cluster racnode2
08-19-2019 23:11:15 UTC : : ssh to the node racnode1 and executing cvu checks on racnode2
08-19-2019 23:12:33 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/export/app/12.2.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/export/app/12.2.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED
Verifying User Existence: oracle ...
Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Addition ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying '/export/app/12.2.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING (PRVG-1615)
Verifying ASM device sharedness check ...WARNING (PRVG-1615)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...WARNING (PRVG-1615)
Verifying Database home availability ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying User Not In Group "root": grid ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying DNS/NIS name service ...PASSED
Verifying User Equivalence ...PASSED
Verifying /boot mount ...PASSED
Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode2: Check for integrity of file "/etc/resolv.conf" failed

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

CVU operation performed: stage -pre nodeadd
Date: Aug 19, 2019 11:11:17 PM
CVU home: /export/app/12.2.0/grid/
User: grid
08-19-2019 23:12:33 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
08-19-2019 23:12:33 UTC : : Running Node Addition and cluvfy test for node racnode2
08-19-2019 23:12:33 UTC : : Copying /tmp/grid_addnode.rsp on remote node racnode1
08-19-2019 23:12:33 UTC : : Running GridSetup.sh on racnode1 to add the node to existing cluster
08-19-2019 23:12:41 UTC : : Node Addition performed. removing Responsefile
08-19-2019 23:12:41 UTC : : Running root.sh on node racnode2
08-19-2019 23:12:41 UTC : : Nodes in the cluster racnode2
08-19-2019 23:12:41 UTC : : Checking Cluster
08-19-2019 23:12:41 UTC : : Cluster Check failed
08-19-2019 23:12:41 UTC : : Error has occurred in Grid Setup, Please verify!

psaini79 commented 5 years ago

@babloo2642

Please do the following:

docker exec -i -t racnode2 /bin/bash
ps -u grid
crsctl check cluster

Upload the following files:

/export/app/12.2.0/grid/install/root_racnode2_2019.log
/export/app/oraInventory/logs/GridSetupActions2019
/tmp/grid.rsp
/export/app/12.2.0/grid/crs/install/crsconfig_params
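
If it is easier, the requested files can be bundled into a single archive before uploading. A minimal sketch, assuming the paths listed above; adjust any that differ on your system, and note the archive name is arbitrary:

# bundle the requested diagnostics (run as root inside racnode2)
tar czf /tmp/racnode2_diag.tgz \
  /export/app/12.2.0/grid/install/root_racnode2_*.log \
  /export/app/oraInventory/logs \
  /tmp/grid.rsp \
  /export/app/12.2.0/grid/crs/install/crsconfig_params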

babloo2642 commented 5 years ago

@psaini79

Please find the details below.

#################################
[grid@racnode2 ~]$ ps -u grid
  PID TTY          TIME CMD
 5005 pts/1    00:00:00 bash
18641 pts/2    00:00:00 bash
18658 pts/2    00:00:00 ps
#####################################
[grid@racnode2 ~]$ crsctl check cluster
bash: crsctl: command not found
################################################
/export/app/12.2.0/grid/install/root_racnode2_2019.log
root_racnode2_2019-08-20_22-20-56-570476528.log.tgz.zip
root_racnode2_2019-08-21_03-45-13-263939989.log.tgz.zip
root_racnode2_2019-08-21_17-11-44-414924925.log.tgz.zip
#################################################
/export/app/oraInventory/logs/GridSetupActions2019

There is no file GridSetupActions2019*.

bash-4.2# cd /export/app/oraInventory/logs/
bash-4.2# ls
OPatch2019-07-28_04-18-48-PM.log
cloneActions2019-07-28_04-17-31PM.log
installActions2019-07-28_04-19-48PM.log
oraInstall2019-07-28_04-17-31PM.err
oraInstall2019-07-28_04-17-31PM.out
oraInstall2019-07-28_04-19-48PM.err
oraInstall2019-07-28_04-19-48PM.out
silentInstall2019-07-28_04-17-31PM.log
silentInstall2019-07-28_04-19-48PM.log
###############################################
/tmp/grid.rsp

There is no response file.

bash-4.2# cd /tmp
bash-4.2# ls
CVU_12.2.0.1.0_grid
CVU_12.2.0.1.0_oracle
CVU_12.2.0.1.0_resource
cluvfy_check.20190821-034346.txt
cluvfy_check.20190821-171003.txt
cluvfy_check.txt
grid_SetupSSH.log
hsperfdata_grid
hsperfdata_oracle
hsperfdata_root
oracle_SetupSSH.log
orod.log
orod.log.20190821-034147
orod.log.20190821-170803
#####################################################
/export/app/12.2.0/grid/crs/install/crsconfig_params
crsconfig_params.tgz.zip
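
Since crsctl was not found on the PATH above, it may be worth trying it by its full path under the Grid home (path assumed from the logs in this thread):

/export/app/12.2.0/grid/bin/crsctl check cluster

If the binary is missing there as well, that would suggest the Grid software stack never made it onto racnode2.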

psaini79 commented 5 years ago

@babloo2642

I looked at the logs and I am not sure why it is trying to create a new cluster.

Please paste your container creation command.
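
It may also help to confirm what the setup script actually sees inside the container; a quick sketch using the environment variable names from this image:

docker exec racnode2 /bin/bash -c 'env | egrep "OP_TYPE|EXISTING_CLS_NODES|ORACLE_SID"'

With OP_TYPE=ADDNODE and EXISTING_CLS_NODES set, the script should attempt a node addition rather than a fresh cluster install.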

babloo2642 commented 5 years ago

@psaini79

Please find the container creation command below.

docker create -t -i \
  --hostname racnode2 \
  --volume /dev/shm \
  --tmpfs /dev/shm:rw,exec,size=4G \
  --volume /boot:/boot:ro \
  --dns-search=example.com \
  --volume /opt/containers/rac_host_file:/etc/hosts \
  --volume /opt/.secrets:/run/secrets \
  --device=/dev/sdb2:/dev/asm_disk1 \
  --device=/dev/sdb3:/dev/asm_disk2 \
  --privileged=false \
  --cap-add=SYS_NICE \
  --cap-add=SYS_RESOURCE \
  --cap-add=NET_ADMIN \
  -e EXISTING_CLS_NODES=racnode1 \
  -e NODE_VIP=172.16.1.161 \
  -e VIP_HOSTNAME=racnode2-vip \
  -e PRIV_IP=192.168.17.151 \
  -e PRIV_HOSTNAME=racnode2-priv \
  -e PUBLIC_IP=172.16.1.151 \
  -e PUBLIC_HOSTNAME=racnode2 \
  -e DOMAIN=example.com \
  -e SCAN_NAME=racnode-scan \
  -e SCAN_IP=172.16.1.70 \
  -e ASM_DISCOVERY_DIR=/dev \
  -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \
  -e ORACLE_SID=ORCLCDB \
  -e OP_TYPE=ADDNODE \
  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
  -e PWD_KEY=pwd.key \
  --tmpfs=/run \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --ulimit rtprio=99 \
  --restart=always \
  --name racnode2 \
  oracle/database-rac:12.2.0.1

babloo2642 commented 5 years ago

@psaini79

I have a question: in the above racnode2 container creation command, is "ORACLE_SID=ORCLCDB" correct? In the racnode1 container the database instance name is "ORCLCDB1". Please find the details below:

[oracle@racnode1 ~]$ ps -ef|grep pmon
grid      1328     1  0 Aug16 ?        00:00:35 asm_pmon_+ASM1
grid      2089     1  0 Aug16 ?        00:00:32 mdb_pmon_-MGMTDB
oracle    2624     1  0 Aug16 ?        00:00:34 ora_pmon_ORCLCDB1
oracle   15333 15243  0 21:20 pts/2    00:00:00 grep --color=auto pmon

psaini79 commented 5 years ago

@babloo2642

Yes, that is fine. ORACLE_SID here is the database name (ORCLCDB); each RAC instance appends its instance number, which is why racnode1 runs instance ORCLCDB1 and racnode2 would run ORCLCDB2. Let me give you manual steps to debug this issue. Please give me some time; I will get back to you.
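
To confirm the mapping, the database configuration can be checked from racnode1; a sketch, assuming the oracle user's environment points at the database home:

su - oracle -c "srvctl config database -d ORCLCDB"

This should report ORCLCDB as the database name with ORCLCDB1 as its configured instance; a second instance would appear once the racnode2 addition succeeds.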

babloo2642 commented 5 years ago

@psaini79

OK, sounds good. Please take your time.

psaini79 commented 5 years ago

@babloo2642

Please execute the following and paste the output:

docker exec -i -t racnode2 /bin/bash
sudo /bin/bash
sh -x /opt/scripts/startup/runOracle.sh
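
Capturing the trace to a file as well makes it easier to attach here; for example (log path is arbitrary):

sh -x /opt/scripts/startup/runOracle.sh 2>&1 | tee /tmp/runOracle_trace.log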

babloo2642 commented 5 years ago

@psaini79

Please find the output below:

[grid@racnode2 ~]$ sudo /bin/bash
bash-4.2# sh -x /opt/scripts/startup/runOracle.sh

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.037/0.041/0.045/0.002 ms
PING racnode2.example.com (172.16.1.151) 56(84) bytes of data.
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=3 ttl=64 time=0.032 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=4 ttl=64 time=0.032 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=5 ttl=64 time=0.025 ms

--- racnode2.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.025/0.030/0.034/0.006 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1 racnode2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 2
The script will setup SSH connectivity from the host racnode2 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode2 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host racnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host racnode1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode1.
Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts.
Done with creating .ssh directory and setting permissions on remote host racnode1.
Creating .ssh directory and setting permissions on remote host racnode2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host racnode2. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode2.
Warning: Permanently added 'racnode2,172.16.1.151' (ECDSA) to the list of known hosts.
grid@racnode2's password:
Done with creating .ssh directory and setting permissions on remote host racnode2.
Copying local host public key to the remote host racnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1.
Done copying local host public key to the remote host racnode1
Copying local host public key to the remote host racnode2
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode2.
grid@racnode2's password:
Done copying local host public key to the remote host racnode2
Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Creating keys on remote host racnode2 if they do not exist already. This is required to setup SSH on host racnode2.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
Updating authorized_keys file on remote host racnode2
Updating known_hosts file on remote host racnode2
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user grid.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:-- Running /usr/bin/ssh -x -l grid racnode1 date to verify SSH connectivity has been setup from local host to racnode1. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Sat Aug 24 17:38:16 UTC 2019

    --racnode2:-- Running /usr/bin/ssh -x -l grid racnode2 date to verify SSH connectivity has been setup from local host to racnode2. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Sat Aug 24 17:38:16 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:17 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode2

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:17 UTC 2019

    -Verification from racnode1 complete-

    Verifying SSH connectivity has been setup from racnode2 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:17 UTC 2019


    Verifying SSH connectivity has been setup from racnode2 to racnode2

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:17 UTC 2019

-Verification from racnode2 complete-
SSH verification complete.
spawn /export/app/oracle/product/12.2.0/dbhome_1/oui/prov/resources/scripts/sshUserSetup.sh -user oracle -hosts racnode1 racnode2 -logfile /tmp/oracle_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm
The output of this script is also logged into /tmp/oracle_SetupSSH.log
Hosts are racnode1 racnode2
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.031 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.030 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.039 ms

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.030/0.033/0.039/0.008 ms
PING racnode2.example.com (172.16.1.151) 56(84) bytes of data.
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=2 ttl=64 time=0.030 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=4 ttl=64 time=0.025 ms
64 bytes from racnode2.example.com (172.16.1.151): icmp_seq=5 ttl=64 time=0.032 ms

--- racnode2.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.025/0.030/0.033/0.006 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1 racnode2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 2
The script will setup SSH connectivity from the host racnode2 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode2 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host racnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode1.
Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts.
Done with creating .ssh directory and setting permissions on remote host racnode1.
Creating .ssh directory and setting permissions on remote host racnode2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode2. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode2.
Warning: Permanently added 'racnode2,172.16.1.151' (ECDSA) to the list of known hosts.
oracle@racnode2's password:
Done with creating .ssh directory and setting permissions on remote host racnode2.
Copying local host public key to the remote host racnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1.
Done copying local host public key to the remote host racnode1
Copying local host public key to the remote host racnode2
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode2.
oracle@racnode2's password:
Done copying local host public key to the remote host racnode2
Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Creating keys on remote host racnode2 if they do not exist already. This is required to setup SSH on host racnode2.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
Updating authorized_keys file on remote host racnode2
Updating known_hosts file on remote host racnode2
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:-- Running /usr/bin/ssh -x -l oracle racnode1 date to verify SSH connectivity has been setup from local host to racnode1. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Sat Aug 24 17:38:28 UTC 2019

    --racnode2:-- Running /usr/bin/ssh -x -l oracle racnode2 date to verify SSH connectivity has been setup from local host to racnode2. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Sat Aug 24 17:38:28 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:28 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode2

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:28 UTC 2019

    -Verification from racnode1 complete-

    Verifying SSH connectivity has been setup from racnode2 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:29 UTC 2019


    Verifying SSH connectivity has been setup from racnode2 to racnode2

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Sat Aug 24 17:38:29 UTC 2019

-Verification from racnode2 complete-
SSH verification complete.
su - $GRID_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $GRID_USER@$node echo ok 2>&1"
su - $ORACLE_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $ORACLE_USER@$node echo ok 2>&1"
-bash: /etc/rac_env_vars: Permission denied
-bash: /etc/rac_env_vars: Permission denied
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Database is enabled
Instance -MGMTDB is running on node racnode1
CRS-272: This command remains for backward compatibility only
Cluster Ready Services is online
CRS-272: This command remains for backward compatibility only
Cluster Synchronization Services is online
CRS-272: This command remains for backward compatibility only
Event Manager is online

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/export/app/12.2.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/export/app/12.2.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED
Verifying User Existence: oracle ...
Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Addition ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying '/export/app/12.2.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING (PRVG-1615)
Verifying ASM device sharedness check ...WARNING (PRVG-1615)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...WARNING (PRVG-1615)
Verifying Database home availability ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying User Not In Group "root": grid ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying DNS/NIS name service ...PASSED
Verifying User Equivalence ...PASSED
Verifying /boot mount ...PASSED
Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode2: Check for integrity of file "/etc/resolv.conf" failed

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

CVU operation performed: stage -pre nodeadd
Date: Aug 24, 2019 5:38:35 PM
CVU home: /export/app/12.2.0/grid/
User: grid
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-43042] The cluster nodes [racnode2] specified for addnode is already part of a cluster.
CAUSE: Cluster nodes specified already has clusterware configured.
ACTION: Ensure that the nodes that do not have clusterware configured are provided for addnode operation.
Check /export/app/12.2.0/grid/install/root_racnode2_2019-08-24_17-40-01-641436580.log for the output of root script
-bash: /export/app/12.2.0/grid/bin/crsctl: No such file or directory
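
INS-43042 indicates that a stale clusterware configuration (or inventory entry) for racnode2 was left behind by an earlier failed attempt. If that turns out to be the case, one possible cleanup before retrying the node addition is to deconfigure the stale stack on racnode2 as root. This is a sketch only, assuming the Grid home from the logs above and that the script is present; the missing crsctl binary suggests the software copy itself may be incomplete, in which case removing and recreating the racnode2 container may be simpler:

# deconfigure a stale clusterware setup on racnode2 (run as root; verify state first)
/export/app/12.2.0/grid/crs/install/rootcrs.sh -deconfig -force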