oracle / docker-images

Official source of container configurations, images, and examples for Oracle products and projects
https://developer.oracle.com/use-cases/#containers
Universal Permissive License v1.0

Unable to create DB on 12.2.0.1 RAC container #1339

Closed: babloo2642 closed this issue 5 years ago

babloo2642 commented 5 years ago

Verifying Time zone consistency ...PASSED

Verifying VIP Subnet configuration check ...PASSED

Verifying resolv.conf Integrity ...

Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)

Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)

Verifying DNS/NIS name service ...

Verifying Name Service Switch Configuration File Integrity ...PASSED

Verifying DNS/NIS name service ...FAILED (PRVG-1101)

Verifying Single Client Access Name (SCAN) ...PASSED

Verifying Domain Sockets ...PASSED

Verifying /boot mount ...PASSED

Verifying Daemon "avahi-daemon" not configured and running ...PASSED

Verifying Daemon "proxyt" not configured and running ...PASSED

Verifying loopback network interface address ...PASSED

Verifying Oracle base: /export/app/grid ...

Verifying '/export/app/grid' ...PASSED

Verifying Oracle base: /export/app/grid ...PASSED

Verifying User Equivalence ...PASSED

Verifying Network interface bonding status of private interconnect network interfaces ...PASSED

Verifying File system mount options for path /var ...PASSED

Verifying zeroconf check ...PASSED

Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING

Verifying ASM device sharedness check ...WARNING

Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1

...WARNING

PRVG-1615 : Virtual environment detected. Skipping shared storage check for

disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED

Verifying resolv.conf Integrity ...FAILED

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded

      "15000" ms on following nodes: racnode1

racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the

      specified type by name servers o"127.0.0.11".

racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded

        "15000" ms on following nodes: racnode1

racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the

        specified type by name servers o"127.0.0.11".

Verifying DNS/NIS name service ...FAILED

PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst

Date: Jul 19, 2019 2:17:31 AM

CVU home: /export/app/12.2.0/grid/

User: grid

07-19-2019 02:18:39 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.

07-19-2019 02:18:39 UTC : : Running Grid Installation

07-19-2019 02:18:56 UTC : : Running root.sh

07-19-2019 02:18:56 UTC : : Nodes in the cluster racnode1

07-19-2019 02:18:56 UTC : : Running root.sh on racnode1

07-19-2019 02:18:57 UTC : : Running post root.sh steps

07-19-2019 02:18:57 UTC : : Running post root.sh steps to setup Grid env

07-19-2019 02:19:03 UTC : : Checking Cluster Status

07-19-2019 02:19:03 UTC : : Nodes in the cluster

07-19-2019 02:19:03 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed

07-19-2019 02:19:03 UTC : : Generating DB Responsefile Running DB creation

07-19-2019 02:19:03 UTC : : Running DB creation

07-19-2019 02:19:14 UTC : : Checking DB status

07-19-2019 02:19:15 UTC : : ORCLCDB is not up and running on racnode1

07-19-2019 02:19:15 UTC : : Error has occurred in Grid Setup, Please verify!
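
Most of the failed checks above boil down to name resolution inside the container: CVU asks the configured name server (the embedded Docker DNS at 127.0.0.11) for the cluster names and gives up after 15 seconds. A quick, hedged way to see what the container can and cannot resolve (the hostnames and SCAN IP are taken from this thread; nslookup is only usable if bind-utils is installed in the image):

    # inside the racnode1 container
    cat /etc/resolv.conf                 # shows nameserver 127.0.0.11 (embedded Docker DNS)
    getent hosts racnode-scan            # resolves via /etc/hosts first; expected 172.16.1.70
    nslookup racnode-scan 127.0.0.11     # queries the Docker DNS directly, the lookup CVU times out on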

psaini79 commented 5 years ago

@babloo2642 OK, this looks like a configuration issue. Please share the following: exec into the container with docker exec -i -t racnode1 /bin/bash and check for /tmp/grid_addnode.rsp.

If the file is there, please attach it here.
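
A minimal sketch of that check (the container name racnode1 and the response-file path come from this thread; everything else is plain docker/shell usage):

    # open a shell in the racnode1 container from the Docker host
    docker exec -i -t racnode1 /bin/bash
    # inside the container, see whether the add-node response file was generated
    ls -l /tmp/grid_addnode.rsp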

babloo2642 commented 5 years ago

@psaini79

There is no file /tmp/grid_addnode.rsp

[grid@racnode1 tmp]$ ls
CVU_12.2.0.1.0_grid       hsperfdata_root
CVU_12.2.0.1.0_oracle     oracle_SetupSSH.log
CVU_12.2.0.1.0_resource   orod.log
db_status.txt             orod.log.20190812-034500
dbca.rsp                  orod.log.20190814-191650
grid.rsp                  orod.log.20190815-055332
grid.rsp.tgz              orod.log.20190815-065023
grid_SetupSSH.log         orod.log.20190815-071953
hsperfdata_grid           orod.log.20190816-021719
hsperfdata_oracle         tfa_install_22114_2019_08_15-06_53_57.log

psaini79 commented 5 years ago

@babloo2642

Please do the following: download the attached grid_addnode.zip, copy it to racnode1, and unzip it under /tmp. Then execute the following steps on racnode1 as the grid user:

$GRID_HOME/runcluvfy.sh stage -pre nodeadd -n racnode2
$GRID_HOME/gridSetup.sh -silent -waitForCompletion -noCopy -skipPrereqs -responseFile /tmp/grid_addnode.rsp

Attachment: grid_addnode.zip

Paste the output once the command completes.
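
Putting the requested steps together as one sequence (a sketch; the docker cp destination and the unzip step are assumptions about how the attachment gets into the container, and unzip must be available in the image; the two Grid commands are exactly the ones above):

    # from the Docker host: copy the attachment into the container
    docker cp grid_addnode.zip racnode1:/tmp/
    # inside racnode1, as the grid user
    cd /tmp && unzip grid_addnode.zip
    $GRID_HOME/runcluvfy.sh stage -pre nodeadd -n racnode2
    $GRID_HOME/gridSetup.sh -silent -waitForCompletion -noCopy -skipPrereqs -responseFile /tmp/grid_addnode.rsp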

babloo2642 commented 5 years ago

@psaini79

Please find the below output:

[grid@racnode1 tmp]$ $GRID_HOME/runcluvfy.sh stage -pre nodeadd -n racnode2

Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...PASSED Verifying Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/export/app/12.2.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/export/app/12.2.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED Verifying User Existence: oracle ... Verifying Users With Same UID: 54321 ...PASSED Verifying User Existence: oracle ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying User Existence: root ... Verifying Users With Same UID: 0 ...PASSED Verifying User Existence: root ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: asmdba ...PASSED Verifying Group Existence: oinstall ...PASSED Verifying Group Membership: oinstall ...PASSED Verifying Group Membership: asmdba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...PASSED Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...PASSED Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node Addition ... Verifying CRS Integrity ...PASSED Verifying Clusterware Version Consistency ...PASSED Verifying '/export/app/12.2.0/grid' ...PASSED Verifying Node Addition ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED Verifying Node Connectivity ...PASSED Verifying Multicast check ...PASSED Verifying ASM Integrity ... Verifying Node Connectivity ... 
Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING (PRVG-1615) Verifying ASM device sharedness check ...WARNING (PRVG-1615) Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...WARNING (PRVG-1615) Verifying Database home availability ...PASSED Verifying OCR Integrity ...PASSED Verifying Time zone consistency ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED Verifying User Not In Group "root": grid ...PASSED Verifying resolv.conf Integrity ... Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying DNS/NIS name service ...PASSED Verifying User Equivalence ...PASSED Verifying /boot mount ...PASSED Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk1,/dev/asm_disk2 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode2: Check for integrity of file "/etc/resolv.conf" failed

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode2: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers o"127.0.0.11".

racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1,racnode2
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

CVU operation performed: stage -pre nodeadd
Date: Aug 24, 2019 7:46:56 PM
CVU home: /export/app/12.2.0/grid/
User: grid

[grid@racnode1 tmp]$ cd
[grid@racnode1 ~]$ $GRID_HOME/gridSetup.sh -silent -waitForCompletion -noCopy -skipPrereqs -responseFile /tmp/grid_addnode.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-43042] The cluster nodes [racnode2] specified for addnode is already part of a cluster.
CAUSE: Cluster nodes specified already has clusterware configured.
ACTION: Ensure that the nodes that do not have clusterware configured are provided for addnode operation.
[grid@racnode1 ~]$

psaini79 commented 5 years ago

@babloo2642

Please execute the following steps. Log in to racnode2:

docker exec -i -t racnode2 /bin/bash
sudo /bin/bash

sh -x /export/app/12.2.0/grid/root.sh

Attach the following file: /export/app/12.2.0/grid/crs/install/crsconfig_params

Also, paste your /etc/hosts output.
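
One way to capture those artifacts from the Docker host (the container name and file paths come from this thread; running docker exec with -u root instead of the interactive sudo step is an assumption about the image's default user):

    # run root.sh with tracing inside racnode2 and capture the output locally
    docker exec -i -u root racnode2 sh -x /export/app/12.2.0/grid/root.sh > root_sh_trace.log 2>&1
    # copy the requested files out of the container for attaching to the issue
    docker cp racnode2:/export/app/12.2.0/grid/crs/install/crsconfig_params ./crsconfig_params
    docker cp racnode2:/etc/hosts ./racnode2_etc_hosts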

babloo2642 commented 5 years ago

@psaini79

Please find the below output:

[grid@racnode2 ~]$ sudo /bin/bash
bash-4.2#
bash-4.2# sh -x /export/app/12.2.0/grid/root.sh

172.16.1.150 racnode1.example.com racnode1

192.168.17.150 racnode1-priv.example.com racnode1-priv

172.16.1.160 racnode1-vip.example.com racnode1-vip

172.16.1.70 racnode-scan.example.com racnode-scan

172.16.1.15 racnode-cman1.example.com racnode-cman1

172.16.1.151 racnode2.example.com racnode2

192.168.17.151 racnode2-priv.example.com racnode2-priv

172.16.1.161 racnode2-vip.example.com racnode2-vip

172.16.1.152 racnode3.example.com racnode3

192.168.17.152 racnode3-priv.example.com racnode3-priv

172.16.1.162 racnode3-vip.example.com racnode3-vip
[grid@racnode2 ~]$
##########################################
Please find the zip file attached below: crsconfig_params.tgz.zip

psaini79 commented 5 years ago

@babloo2642

It seems crsconfig_params is not getting populated. I recently merged new code on GitHub. Is it possible for you to delete racnode1, racnode2, and the Docker RAC image, then clone the repo again, rebuild the image, and rebuild the RAC node containers racnode1 and racnode2?

Let me know if you are OK with this.
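
A rough teardown-and-rebuild sequence for reference (the container names are from this thread; the image name, repo layout, and build script are assumptions based on how the RAC samples in this repository are typically laid out, so adjust to your local setup):

    # remove the existing RAC node containers
    docker rm -f racnode1 racnode2
    # remove the old RAC image (image name assumed)
    docker rmi oracle/database-rac:12.2.0.1
    # re-clone the repository and rebuild the image (paths and build script assumed)
    git clone https://github.com/oracle/docker-images.git
    cd docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles
    ./buildDockerImage.sh -v 12.2.0.1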

babloo2642 commented 5 years ago

@psaini79

Yes, I can do that, not a problem. I'll update you once I'm done with it, maybe by Monday or Tuesday. Are you available next week to help me?

psaini79 commented 5 years ago

@babloo2642

Yes, I will assist you next week if you run into any issues. I tested again on my machine and it worked without any issue.

Keep me posted if you see any issue.

babloo2642 commented 5 years ago

@psaini79

I have rebuilt the RAC image and started the racnode1 container, but the DB is not up and running. Can you please look into the logs?

docker logs -f racnode1

PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=racnode1 TERM=xterm NODE_VIP=172.16.1.160 VIP_HOSTNAME=racnode1-vip PRIV_IP=192.168.17.150 PRIV_HOSTNAME=racnode1-priv PUBLIC_IP=172.16.1.150 PUBLIC_HOSTNAME=racnode1 SCAN_NAME=racnode-scan SCAN_IP=172.16.1.70 OP_TYPE=INSTALL DOMAIN=example.com ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 ASM_DISCOVERY_DIR=/dev CMAN_HOSTNAME=racnode-cman1 CMAN_IP=172.16.1.15 COMMON_OS_PWD_FILE=common_os_pwdfile.enc PWD_KEY=pwd.key SETUP_LINUX_FILE=setupLinuxEnv.sh INSTALL_DIR=/opt/scripts GRID_BASE=/export/app/grid GRID_HOME=/export/app/12.2.0/grid INSTALL_FILE_1=linuxx64_12201_grid_home.zip GRID_INSTALL_RSP=grid.rsp GRID_SETUP_FILE=setupGrid.sh GRID_HOME_CLEANUP=GridHomeCleanup.sh FIXUP_PREQ_FILE=fixupPreq.sh INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh INSTALL_GRID_PATCH=applyGridPatch.sh INVENTORY=/export/app/oraInventory CONFIGGRID=configGrid.sh ADDNODE=AddNode.sh DELNODE=DelNode.sh ADDNODE_RSP=grid_addnode.rsp SETUPSSH=setupSSH.expect GRID_PATCH=p27383741_122010_Linux-x86-64.zip PATCH_NUMBER=27383741 SETUPDOCKERORACLEINIT=setupdockeroracleinit.sh DOCKERORACLEINIT=dockeroracleinit GRID_USER_HOME=/home/grid SETUPGRIDENV=setupGridEnv.sh DB_BASE=/export/app/oracle DB_HOME=/export/app/oracle/product/12.2.0/dbhome_1 INSTALL_FILE_2=linuxx64_12201_database.zip DB_INSTALL_RSP=db_inst.rsp DBCA_RSP=dbca.rsp DB_SETUP_FILE=setupDB.sh PWD_FILE=setPassword.sh RUN_FILE=runOracle.sh STOP_FILE=stopOracle.sh ENABLE_RAC_FILE=enableRAC.sh CHECK_DB_FILE=checkDBStatus.sh USER_SCRIPTS_FILE=runUserScripts.sh REMOTE_LISTENER_FILE=remoteListener.sh INSTALL_DB_BINARIES_FILE=installDBBinaries.sh RESET_OS_PASSWORD=resetOSPassword.sh MULTI_NODE_INSTALL=MultiNodeInstall.py ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh FUNCTIONS=functions.sh COMMON_SCRIPTS=/common_scripts CHECK_SPACE_FILE=checkSpace.sh EXPECT=/usr/bin/expect BIN=/usr/sbin container=true INSTALL_SCRIPTS=/opt/scripts/install SCRIPT_DIR=/opt/scripts/startup GRID_PATH=/export/app/12.2.0/grid/bin:/export/app/12.2.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DB_PATH=/export/app/oracle/product/12.2.0/dbhome_1/bin:/export/app/oracle/product/12.2.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin GRID_LD_LIBRARY_PATH=/export/app/12.2.0/grid/lib:/usr/lib:/lib DB_LD_LIBRARY_PATH=/export/app/oracle/product/12.2.0/dbhome_1/lib:/usr/lib:/lib HOME=/home/grid systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Detected virtualization other. Detected architecture x86-64.

Welcome to Oracle Linux Server 7.6!

Set hostname to . Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory /usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1. Binding to IPv6 address not available since kernel does not support IPv6. Binding to IPv6 address not available since kernel does not support IPv6. Cannot add dependency job for unit display-manager.service, ignoring: Unit not found. [ OK ] Reached target RPC Port Mapper. [ OK ] Created slice Root Slice. [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Started Forward Password Requests to Wall Directory Watch. [ OK ] Created slice System Slice. [ OK ] Created slice User and Session Slice. [ OK ] Started Dispatch Password Requests to Console Directory Watch. [ OK ] Reached target Slices. [ OK ] Reached target Local Encrypted Volumes. [ OK ] Reached target Swap. [ OK ] Created slice system-getty.slice. [ OK ] Listening on Journal Socket. [ OK ] Reached target Local File Systems (Pre). Starting Journal Service... Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Rebuild Hardware Database... Starting Configure read-only root support... Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory [ OK ] Started Journal Service. Starting Flush Journal to Persistent Storage... [ OK ] Started Flush Journal to Persistent Storage. [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. [ OK ] Started Configure read-only root support. Starting Load/Save Random Seed... [ OK ] Reached target Local File Systems. Starting Mark the need to relabel after reboot... Starting Rebuild Journal Catalog... Starting Preprocess NFS configuration... Starting Create Volatile Files and Directories... [ OK ] Started Load/Save Random Seed. [ OK ] Started Mark the need to relabel after reboot. [ OK ] Started Rebuild Journal Catalog. [ OK ] Started Preprocess NFS configuration. [ OK ] Started Create Volatile Files and Directories. Mounting RPC Pipe File System... Starting Update UTMP about System Boot/Shutdown... [FAILED] Failed to mount RPC Pipe File System. See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details. [DEPEND] Dependency failed for rpc_pipefs.target. [DEPEND] Dependency failed for RPC security service for NFS client and server. [ OK ] Started Update UTMP about System Boot/Shutdown. [ OK ] Started Rebuild Hardware Database. Starting Update is Completed... [ OK ] Started Update is Completed. [ OK ] Reached target System Initialization. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Listening on RPCbind Server Activation Socket. Starting RPC bind service... [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Reached target Sockets. [ OK ] Reached target Basic System. Starting LSB: Bring up/down networking... Starting Login Service... [ OK ] Started D-Bus System Message Bus. Starting Resets System Activity Logs... Starting Self Monitoring and Reporting Technology (SMART) Daemon... Starting OpenSSH Server Key Generation... Starting GSSAPI Proxy Daemon... [ OK ] Started Daily Cleanup of Temporary Directories. 
[ OK ] Reached target Timers. [ OK ] Started RPC bind service. Starting Cleanup of Temporary Directories... [ OK ] Started Login Service. [ OK ] Started Cleanup of Temporary Directories. [ OK ] Started Resets System Activity Logs. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Command Scheduler. [ OK ] Started OpenSSH Server Key Generation. [ OK ] Started LSB: Bring up/down networking. [ OK ] Reached target Network. Starting /etc/rc.d/rc.local Compatibility... Starting OpenSSH server daemon... [ OK ] Reached target Network is Online. Starting Notify NFS peers of a restart... [ OK ] Started Notify NFS peers of a restart. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Console Getty. [ OK ] Reached target Login Prompts. [ OK ] Started OpenSSH server daemon. 08-26-2019 04:56:11 UTC : : Process id of the program : 08-26-2019 04:56:11 UTC : : ################################################# 08-26-2019 04:56:11 UTC : : Starting Grid Installation
08-26-2019 04:56:11 UTC : : ################################################# 08-26-2019 04:56:11 UTC : : Pre-Grid Setup steps are in process 08-26-2019 04:56:11 UTC : : Process id of the program : 08-26-2019 04:56:11 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory 08-26-2019 04:56:11 UTC : : Resetting Failed Services 08-26-2019 04:56:11 UTC : : Sleeping for 60 seconds [ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon. [ OK ] Reached target Multi-User System. [ OK ] Reached target Graphical Interface. Starting Update UTMP about System Runlevel Changes... [ OK ] Started Update UTMP about System Runlevel Changes.

Oracle Linux Server 7.6 Kernel 4.1.12-124.25.1.el7uek.x86_64 on an x86_64

racnode1 login: 08-26-2019 04:57:11 UTC : : Systemctl state is running! 08-26-2019 04:57:11 UTC : : Setting correct permissions for /bin/ping 08-26-2019 04:57:11 UTC : : Public IP is set to 172.16.1.150 08-26-2019 04:57:11 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-26-2019 04:57:11 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.com racnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-26-2019 04:57:11 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required 08-26-2019 04:57:11 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-26-2019 04:57:11 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required 08-26-2019 04:57:11 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required 08-26-2019 04:57:11 UTC : : Preapring Device list 08-26-2019 04:57:11 UTC : : Changing Disk permission and ownership /dev/asm_disk1 08-26-2019 04:57:11 UTC : : Changing Disk permission and ownership /dev/asm_disk2 08-26-2019 04:57:11 UTC : : ##################################################################### 08-26-2019 04:57:11 UTC : : RAC setup will begin in 2 minutes
08-26-2019 04:57:11 UTC : : #################################################################### 08-26-2019 04:57:13 UTC : : ################################################### 08-26-2019 04:57:13 UTC : : Pre-Grid Setup steps completed 08-26-2019 04:57:13 UTC : : ################################################### 08-26-2019 04:57:13 UTC : : Checking if grid is already configured 08-26-2019 04:57:13 UTC : : Process id of the program : 08-26-2019 04:57:13 UTC : : Public IP is set to 172.16.1.150 08-26-2019 04:57:13 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-26-2019 04:57:13 UTC : : Domain is defined to example.com 08-26-2019 04:57:13 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 08-26-2019 04:57:13 UTC : : RAC VIP set to 172.16.1.160 08-26-2019 04:57:13 UTC : : RAC Node VIP hostname is set to racnode1-vip 08-26-2019 04:57:13 UTC : : SCAN_NAME name is racnode-scan 08-26-2019 04:57:13 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port. 08-26-2019 04:57:33 UTC : : 172.16.1.70 08-26-2019 04:57:33 UTC : : SCAN Name resolving to IP. Check Passed! 08-26-2019 04:57:33 UTC : : SCAN_IP name is 172.16.1.70 08-26-2019 04:57:33 UTC : : RAC Node PRIV IP is set to 192.168.17.150 08-26-2019 04:57:33 UTC : : RAC Node private hostname is set to racnode1-priv 08-26-2019 04:57:33 UTC : : CMAN_HOSTNAME name is racnode-cman1 08-26-2019 04:57:33 UTC : : CMAN_IP name is 172.16.1.15 08-26-2019 04:57:33 UTC : : Cluster Name is not defined 08-26-2019 04:57:33 UTC : : Cluster name is set to 'racnode-c' 08-26-2019 04:57:33 UTC : : Password file generated 08-26-2019 04:57:33 UTC : : Common OS Password string is set for Grid user 08-26-2019 04:57:33 UTC : : Common OS Password string is set for Oracle user 08-26-2019 04:57:33 UTC : : Common OS Password string is set for Oracle Database 08-26-2019 04:57:33 UTC : : Setting CONFIGURE_GNS to false 08-26-2019 04:57:33 UTC : : GRID_RESPONSE_FILE env variable set to empty. 
configGrid.sh will use standard cluster responsefile 08-26-2019 04:57:33 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 08-26-2019 04:57:33 UTC : : IGNORE_CVU_CHECKS is set to true 08-26-2019 04:57:33 UTC : : Oracle SID is set to ORCLCDB 08-26-2019 04:57:33 UTC : : Oracle PDB name is set to ORCLPDB 08-26-2019 04:57:33 UTC : : Check passed for network card eth1 for public IP 172.16.1.150 08-26-2019 04:57:33 UTC : : Public Netmask : 255.255.255.0 08-26-2019 04:57:33 UTC : : Check passed for network card eth0 for private IP 192.168.17.150 08-26-2019 04:57:33 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File 08-26-2019 04:57:33 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5 08-26-2019 04:57:33 UTC : : Setting random password for grid user 08-26-2019 04:57:33 UTC : : Setting random password for oracle user 08-26-2019 04:57:33 UTC : : Calling setupSSH function 08-26-2019 04:57:33 UTC : : SSh will be setup among racnode1 nodes 08-26-2019 04:57:33 UTC : : Running SSH setup for grid user between nodes racnode1 08-26-2019 04:58:09 UTC : : Running SSH setup for oracle user between nodes racnode1 08-26-2019 04:58:15 UTC : : SSH check fine for the racnode1 08-26-2019 04:58:15 UTC : : SSH check fine for the oracle@racnode1 08-26-2019 04:58:15 UTC : : Preapring Device list 08-26-2019 04:58:15 UTC : : Changing Disk permission and ownership 08-26-2019 04:58:15 UTC : : Changing Disk permission and ownership 08-26-2019 04:58:15 UTC : : ASM Disk size : 0 08-26-2019 04:58:15 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2, 08-26-2019 04:58:15 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2 08-26-2019 04:58:15 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC 08-26-2019 04:58:15 UTC : : Nodes in the cluster racnode1 08-26-2019 04:58:15 UTC : : Setting Device permissions for RAC Install on racnode1 08-26-2019 04:58:15 UTC : : Preapring ASM Device list 08-26-2019 04:58:15 UTC : : Changing Disk permission and ownership 08-26-2019 04:58:15 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-26-2019 04:58:15 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-26-2019 04:58:15 UTC : : Populate Rac Env Vars on Remote Hosts 08-26-2019 04:58:15 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode1 08-26-2019 04:58:15 UTC : : Changing Disk permission and ownership 08-26-2019 04:58:15 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-26-2019 04:58:16 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-26-2019 04:58:16 UTC : : Populate Rac Env Vars on Remote Hosts 08-26-2019 04:58:16 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode1 08-26-2019 04:58:16 UTC : : Generating Reponsefile 08-26-2019 04:58:16 UTC : : Running cluvfy Checks 08-26-2019 04:58:16 UTC : : Performing Cluvfy Checks 08-26-2019 04:59:29 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

ERROR: PRVG-10467 : The default Oracle Inventory group could not be determined.

Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: dba ...PASSED Verifying Group Membership: dba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...PASSED Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...PASSED Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED Verifying Port Availability for component "Oracle Database Listener" ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying Multicast check ...PASSED Verifying ASM Integrity ... Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... 
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615) Verifying ASM device sharedness check ...WARNING (PRVG-1615) Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...WARNING (PRVG-1615) Verifying I/O scheduler ... Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying I/O scheduler ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED Verifying Same core file name pattern ...PASSED Verifying User Mask ...PASSED Verifying User Not In Group "root": grid ...PASSED Verifying Time zone consistency ...PASSED Verifying VIP Subnet configuration check ...PASSED Verifying resolv.conf Integrity ... Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying DNS/NIS name service ... Verifying Name Service Switch Configuration File Integrity ...PASSED Verifying DNS/NIS name service ...FAILED (PRVG-1101) Verifying Single Client Access Name (SCAN) ...PASSED Verifying Domain Sockets ...PASSED Verifying /boot mount ...PASSED Verifying Daemon "avahi-daemon" not configured and running ...PASSED Verifying Daemon "proxyt" not configured and running ...PASSED Verifying loopback network interface address ...PASSED Verifying Oracle base: /export/app/grid ... Verifying '/export/app/grid' ...PASSED Verifying Oracle base: /export/app/grid ...PASSED Verifying User Equivalence ...PASSED Verifying Network interface bonding status of private interconnect network interfaces ...PASSED Verifying File system mount options for path /var ...PASSED Verifying zeroconf check ...PASSED Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst Date: Aug 26, 2019 4:58:22 AM CVU home: /export/app/12.2.0/grid/ User: grid 08-26-2019 04:59:29 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks. 08-26-2019 04:59:29 UTC : : Running Grid Installation 08-26-2019 04:59:46 UTC : : Running root.sh 08-26-2019 04:59:46 UTC : : Nodes in the cluster racnode1 08-26-2019 04:59:46 UTC : : Running root.sh on racnode1 08-26-2019 04:59:46 UTC : : Running post root.sh steps 08-26-2019 04:59:46 UTC : : Running post root.sh steps to setup Grid env 08-26-2019 04:59:52 UTC : : Checking Cluster Status 08-26-2019 04:59:52 UTC : : Nodes in the cluster 08-26-2019 04:59:52 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed 08-26-2019 04:59:52 UTC : : Generating DB Responsefile Running DB creation 08-26-2019 04:59:52 UTC : : Running DB creation 08-26-2019 04:59:56 UTC : : Checking DB status 08-26-2019 04:59:56 UTC : : ORCLCDB is not up and running on racnode1 08-26-2019 04:59:56 UTC : : Error has occurred in Grid Setup, Please verify! 08-26-2019 05:03:11 UTC : resetOSPassword.sh : --help -- 08-26-2019 05:03:11 UTC : resetOSPassword.sh: Please specify correct parameters 08-26-2019 05:03:21 UTC : resetOSPassword.sh : --op_type 'reset_grid_oracle' --pwd_file 'common_os_pwdfile.enc' --secret_volume '/run/secrets' --pwd_key_file 'pwd.key' -- 08-26-2019 05:03:21 UTC : resetOSPassword.sh : RESET_PASSWORD_TYPE=reset_grid_oracle 08-26-2019 05:03:21 UTC : resetOSPassword.sh : PWD_FILE: common_os_pwdfile.enc 08-26-2019 05:03:21 UTC : resetOSPassword.sh : PWD_KEY=pwd.key 08-26-2019 05:03:21 UTC : resetOSPassword.sh : generating node name from the cluster 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Generating password for grid and oracle user 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Password file generated 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Setting password for grid user 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Resetting password for grid on the racnode1 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Password reset seucessfuly on racnode1 for grid 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Setting password for oracle user 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Resetting password for oracle on the racnode1 08-26-2019 05:03:21 UTC : resetOSPassword.sh : Password reset seucessfuly on racnode1 for oracle 08-26-2019 05:14:02 UTC : : Process id of the program : 08-26-2019 05:14:02 UTC : : ################################################# 08-26-2019 05:14:02 UTC : : Starting Grid Installation
08-26-2019 05:14:02 UTC : : ################################################# 08-26-2019 05:14:02 UTC : : Pre-Grid Setup steps are in process 08-26-2019 05:14:02 UTC : : Process id of the program : 08-26-2019 05:14:02 UTC : : Sleeping for 60 seconds 08-26-2019 05:15:02 UTC : : Systemctl state is running! 08-26-2019 05:15:02 UTC : : Setting correct permissions for /bin/ping 08-26-2019 05:15:02 UTC : : Public IP is set to 172.16.1.150 08-26-2019 05:15:02 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-26-2019 05:15:02 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.com racnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-26-2019 05:15:02 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required 08-26-2019 05:15:02 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required 08-26-2019 05:15:02 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required 08-26-2019 05:15:02 UTC : : racnode-cman1 already exists : 172.16.1.15 racnode-cman1.example.com racnode-cman1, no update required 08-26-2019 05:15:02 UTC : : Preapring Device list 08-26-2019 05:15:02 UTC : : Changing Disk permission and ownership /dev/asm_disk1 08-26-2019 05:15:02 UTC : : Changing Disk permission and ownership /dev/asm_disk2 08-26-2019 05:15:02 UTC : : ##################################################################### 08-26-2019 05:15:02 UTC : : RAC setup will begin in 2 minutes
08-26-2019 05:15:02 UTC : : #################################################################### 08-26-2019 05:15:04 UTC : : ################################################### 08-26-2019 05:15:04 UTC : : Pre-Grid Setup steps completed 08-26-2019 05:15:04 UTC : : ################################################### 08-26-2019 05:15:04 UTC : : Checking if grid is already configured 08-26-2019 05:15:04 UTC : : Process id of the program : 08-26-2019 05:15:04 UTC : : Public IP is set to 172.16.1.150 08-26-2019 05:15:04 UTC : : RAC Node PUBLIC Hostname is set to racnode1 08-26-2019 05:15:04 UTC : : Domain is defined to example.com 08-26-2019 05:15:04 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 08-26-2019 05:15:04 UTC : : RAC VIP set to 172.16.1.160 08-26-2019 05:15:04 UTC : : RAC Node VIP hostname is set to racnode1-vip 08-26-2019 05:15:04 UTC : : SCAN_NAME name is racnode-scan 08-26-2019 05:15:04 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port. 08-26-2019 05:15:24 UTC : : 172.16.1.70 08-26-2019 05:15:24 UTC : : SCAN Name resolving to IP. Check Passed! 08-26-2019 05:15:24 UTC : : SCAN_IP name is 172.16.1.70 08-26-2019 05:15:24 UTC : : RAC Node PRIV IP is set to 192.168.17.150 08-26-2019 05:15:24 UTC : : RAC Node private hostname is set to racnode1-priv 08-26-2019 05:15:24 UTC : : CMAN_HOSTNAME name is racnode-cman1 08-26-2019 05:15:24 UTC : : CMAN_IP name is 172.16.1.15 08-26-2019 05:15:24 UTC : : Cluster Name is not defined 08-26-2019 05:15:24 UTC : : Cluster name is set to 'racnode-c' 08-26-2019 05:15:24 UTC : : Password file generated 08-26-2019 05:15:24 UTC : : Common OS Password string is set for Grid user 08-26-2019 05:15:24 UTC : : Common OS Password string is set for Oracle user 08-26-2019 05:15:24 UTC : : Common OS Password string is set for Oracle Database 08-26-2019 05:15:24 UTC : : Setting CONFIGURE_GNS to false 08-26-2019 05:15:24 UTC : : GRID_RESPONSE_FILE env variable set to empty. 
configGrid.sh will use standard cluster responsefile 08-26-2019 05:15:24 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 08-26-2019 05:15:24 UTC : : IGNORE_CVU_CHECKS is set to true 08-26-2019 05:15:24 UTC : : Oracle SID is set to ORCLCDB 08-26-2019 05:15:24 UTC : : Oracle PDB name is set to ORCLPDB 08-26-2019 05:15:24 UTC : : Check passed for network card eth1 for public IP 172.16.1.150 08-26-2019 05:15:24 UTC : : Public Netmask : 255.255.255.0 08-26-2019 05:15:24 UTC : : Check passed for network card eth0 for private IP 192.168.17.150 08-26-2019 05:15:24 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File 08-26-2019 05:15:24 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5 08-26-2019 05:15:24 UTC : : Setting random password for grid user 08-26-2019 05:15:24 UTC : : Setting random password for oracle user 08-26-2019 05:15:24 UTC : : Calling setupSSH function 08-26-2019 05:15:24 UTC : : SSh will be setup among racnode1 nodes 08-26-2019 05:15:24 UTC : : Running SSH setup for grid user between nodes racnode1 08-26-2019 05:16:00 UTC : : Running SSH setup for oracle user between nodes racnode1 08-26-2019 05:16:06 UTC : : SSH check fine for the racnode1 08-26-2019 05:16:06 UTC : : SSH check fine for the oracle@racnode1 08-26-2019 05:16:06 UTC : : Preapring Device list 08-26-2019 05:16:06 UTC : : Changing Disk permission and ownership 08-26-2019 05:16:06 UTC : : Changing Disk permission and ownership 08-26-2019 05:16:06 UTC : : ASM Disk size : 0 08-26-2019 05:16:06 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,,/dev/asm_disk2, 08-26-2019 05:16:06 UTC : : ASM Device list will be groups /dev/asm_disk1,/dev/asm_disk2 08-26-2019 05:16:06 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC 08-26-2019 05:16:06 UTC : : Nodes in the cluster racnode1 08-26-2019 05:16:06 UTC : : Setting Device permissions for RAC Install on racnode1 08-26-2019 05:16:06 UTC : : Preapring ASM Device list 08-26-2019 05:16:06 UTC : : Changing Disk permission and ownership 08-26-2019 05:16:06 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-26-2019 05:16:06 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-26-2019 05:16:06 UTC : : Populate Rac Env Vars on Remote Hosts 08-26-2019 05:16:06 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode1 08-26-2019 05:16:06 UTC : : Changing Disk permission and ownership 08-26-2019 05:16:06 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 08-26-2019 05:16:06 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 08-26-2019 05:16:06 UTC : : Populate Rac Env Vars on Remote Hosts 08-26-2019 05:16:06 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode1 08-26-2019 05:16:06 UTC : : Generating Reponsefile 08-26-2019 05:16:07 UTC : : Running cluvfy Checks 08-26-2019 05:16:07 UTC : : Performing Cluvfy Checks 08-26-2019 05:17:09 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: dba ...PASSED Verifying Group Existence: oinstall ...PASSED Verifying Group Membership: dba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Group Membership: oinstall(Primary) ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...PASSED Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...PASSED Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED Verifying Port Availability for component "Oracle Database Listener" ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying Multicast check ...PASSED Verifying ASM Integrity ... Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying ASM device sharedness check ... 
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615) Verifying ASM device sharedness check ...WARNING (PRVG-1615) Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...WARNING (PRVG-1615) Verifying I/O scheduler ... Verifying Package: cvuqdisk-1.0.10-1 ...PASSED Verifying I/O scheduler ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED Verifying Same core file name pattern ...PASSED Verifying User Mask ...PASSED Verifying User Not In Group "root": grid ...PASSED Verifying Time zone consistency ...PASSED Verifying VIP Subnet configuration check ...PASSED Verifying resolv.conf Integrity ... Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048) Verifying DNS/NIS name service ... Verifying Name Service Switch Configuration File Integrity ...PASSED Verifying DNS/NIS name service ...FAILED (PRVG-1101) Verifying Single Client Access Name (SCAN) ...PASSED Verifying Domain Sockets ...PASSED Verifying /boot mount ...PASSED Verifying Daemon "avahi-daemon" not configured and running ...PASSED Verifying Daemon "proxyt" not configured and running ...PASSED Verifying loopback network interface address ...PASSED Verifying Oracle base: /export/app/grid ... Verifying '/export/app/grid' ...PASSED Verifying Oracle base: /export/app/grid ...PASSED Verifying User Equivalence ...PASSED Verifying Network interface bonding status of private interconnect network interfaces ...PASSED Verifying File system mount options for path /var ...PASSED Verifying zeroconf check ...PASSED Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers o"127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst Date: Aug 26, 2019 5:16:08 AM CVU home: /export/app/12.2.0/grid/ User: grid 08-26-2019 05:17:09 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks. 08-26-2019 05:17:09 UTC : : Running Grid Installation 08-26-2019 05:17:20 UTC : : Running root.sh 08-26-2019 05:17:20 UTC : : Nodes in the cluster racnode1 08-26-2019 05:17:20 UTC : : Running root.sh on racnode1 08-26-2019 05:17:20 UTC : : Running post root.sh steps 08-26-2019 05:17:20 UTC : : Running post root.sh steps to setup Grid env 08-26-2019 05:17:25 UTC : : Checking Cluster Status 08-26-2019 05:17:25 UTC : : Nodes in the cluster 08-26-2019 05:17:25 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed 08-26-2019 05:17:25 UTC : : Generating DB Responsefile Running DB creation 08-26-2019 05:17:26 UTC : : Running DB creation 08-26-2019 05:17:29 UTC : : Checking DB status 08-26-2019 05:17:29 UTC : : ORCLCDB is not up and running on racnode1 08-26-2019 05:17:29 UTC : : Error has occurred in Grid Setup, Please verify!
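
When the log ends with "ORCLCDB is not up and running", the next step is usually to look at the clusterware status and the DBCA logs inside the container. A hedged sketch, using the GRID_HOME, DB_HOME, and SID printed in the environment dump above (the DBCA log location is the usual default and is an assumption):

    # as the grid user inside racnode1: overall clusterware resource status
    /export/app/12.2.0/grid/bin/crsctl stat res -t
    # as the oracle user: status of the ORCLCDB database resource
    /export/app/oracle/product/12.2.0/dbhome_1/bin/srvctl status database -d ORCLCDB
    # DBCA logs normally land under the Oracle base (path assumed)
    ls -lt /export/app/oracle/cfgtoollogs/dbca/ORCLCDB/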

babloo2642 commented 5 years ago

@psaini79

Please find the below output for details:

[grid@racnode1 ~]$ sudo /bin/bash
bash-4.2# sh -x /opt/scripts/startup/runOracle.sh

--- racnode1.example.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 3998ms rtt min/avg/max/mdev = 0.026/0.036/0.043/0.008 ms Remote host reachability check succeeded. The following hosts are reachable: racnode1. The following hosts are not reachable: . All hosts are reachable. Proceeding further... firsthost racnode1 numhosts 1 The script will setup SSH connectivity from the host racnode1 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode1 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes User chose to skip passphrase related questions. Creating .ssh directory on local host, if not present already Creating authorized_keys file on local host Changing permissions on authorized_keys to 644 on local host Creating known_hosts file on local host Changing permissions on known_hosts to 644 on local host Creating config file on local host If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup. Creating .ssh directory and setting permissions on remote host racnode1 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT. The script would create ~grid/.ssh/config file on remote host racnode1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host racnode1. Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts. grid@racnode1's password: Done with creating .ssh directory and setting permissions on remote host racnode1. Copying local host public key to the remote host racnode1 The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1. grid@racnode1's password: Done copying local host public key to the remote host racnode1 Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user grid.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:--
    Running /usr/bin/ssh -x -l grid racnode1 date to verify SSH connectivity has been setup from local host to racnode1.
    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
    Mon Aug 26 05:15:30 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
    Mon Aug 26 05:15:30 UTC 2019

-Verification from racnode1 complete-
SSH verification complete.
spawn /export/app/oracle/product/12.2.0/dbhome_1/oui/prov/resources/scripts/sshUserSetup.sh -user oracle -hosts racnode1 -logfile /tmp/oracle_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm
The output of this script is also logged into /tmp/oracle_SetupSSH.log
Hosts are racnode1
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING racnode1.example.com (172.16.1.150) 56(84) bytes of data.
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=3 ttl=64 time=0.038 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=4 ttl=64 time=0.037 ms
64 bytes from racnode1.example.com (172.16.1.150): icmp_seq=5 ttl=64 time=0.036 ms

--- racnode1.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.026/0.035/0.039/0.006 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 1
The script will setup SSH connectivity from the host racnode1 to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host racnode1 and the remote hosts without being prompted for passwords or confirmations.

NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked.

NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)? Confirmation provided on the command line

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host racnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode1.
Warning: Permanently added 'racnode1,172.16.1.150' (ECDSA) to the list of known hosts.
oracle@racnode1's password:
Done with creating .ssh directory and setting permissions on remote host racnode1.
Copying local host public key to the remote host racnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1.
oracle@racnode1's password:
Done copying local host public key to the remote host racnode1
Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.

Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.


Verifying SSH setup

The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be:

  1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
  2. The server may have disabled public key based authentication.
  3. The client public key on the server may be outdated.
  4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
  5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
  6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the /sysman/prov/resources/ignoreMessages.txt file.

    --racnode1:--
    Running /usr/bin/ssh -x -l oracle racnode1 date to verify SSH connectivity has been setup from local host to racnode1.
    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
    Mon Aug 26 05:16:05 UTC 2019


    Verifying SSH connectivity has been setup from racnode1 to racnode1

    IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
    Mon Aug 26 05:16:05 UTC 2019

-Verification from racnode1 complete-
SSH verification complete.
su - $GRID_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $GRID_USER@$node echo ok 2>&1"
su - $ORACLE_USER -c "ssh -o BatchMode=yes -o ConnectTimeout=5 $ORACLE_USER@$node echo ok 2>&1"
-bash: /etc/rac_env_vars: Permission denied
-bash: /etc/rac_env_vars: Permission denied

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp,racnode1:/export/app/grid ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED
Verifying Port Availability for component "Oracle Database Listener" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING (PRVG-1615)
Verifying ASM device sharedness check ...WARNING (PRVG-1615)
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...WARNING (PRVG-1615)
Verifying I/O scheduler ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying VIP Subnet configuration check ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-10048)
Verifying DNS/NIS name service ...
Verifying Name Service Switch Configuration File Integrity ...PASSED
Verifying DNS/NIS name service ...FAILED (PRVG-1101)
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Oracle base: /export/app/grid ...
Verifying '/export/app/grid' ...PASSED
Verifying Oracle base: /export/app/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying Network interface bonding status of private interconnect network interfaces ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Warnings were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Device Checks for ASM ...WARNING
Verifying ASM device sharedness check ...WARNING
Verifying Shared Storage Accessibility:/dev/asm_disk2,/dev/asm_disk1 ...WARNING
PRVG-1615 : Virtual environment detected. Skipping shared storage check for disks "/dev/asm_disk2,/dev/asm_disk1".

Verifying Network Time Protocol (NTP) ...FAILED

Verifying resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".
racnode1: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED
racnode1: PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racnode1
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

CVU operation performed: stage -pre crsinst
Date: Aug 26, 2019 5:16:08 AM
CVU home: /export/app/12.2.0/grid/
User: grid

Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30516] Please specify unique disk groups.
CAUSE: Installer has detected that the diskgroup name provided already exists on the system.
ACTION: Specify different disk group.
[FATAL] [INS-30530] Following specified disks have invalid header status: [/dev/asm_disk1, /dev/asm_disk2]
ACTION: Ensure only Candidate or Provisioned disks are specified.
Check /export/app/12.2.0/grid/install/root_racnode1_2019-08-26_05-17-20-366551783.log for the output of root script
Launching Oracle Grid Infrastructure Setup Wizard...

The configuration that needs to be performed as privileged user is not completed. The configuration tools can only be executed after that. You can find the logs of this session at: /export/app/oraInventory/logs/GridSetupActions2019-08-26_05-17-20AM

As a root user, execute the following script(s):

  1. /export/app/12.2.0/grid/root.sh

Execute /export/app/12.2.0/grid/root.sh on the following nodes: [racnode1]

After the successful root script execution, proceed to re-run the same 'gridSetup.sh -executeConfigTools' command.

Successfully Configured Software.
[FATAL] [DBT-10602] (Oracle Real Application Cluster (RAC) database) database cannot be created in this system.
CAUSE: Oracle Grid Infrastructure is not configured on the system.
ACTION: Configure Oracle Grid Infrastructure prior to creation of (Oracle Real Application Cluster (RAC) database). Refer to Oracle Grid Infrastructure Installation Guide for installation and configuration steps.
Checking on racnode1

[FATAL] [INS-30516] Please specify unique disk groups.
CAUSE: Installer has detected that the diskgroup name provided already exists on the system.
ACTION: Specify different disk group.
[FATAL] [INS-30530] Following specified disks have invalid header status: [/dev/asm_disk1, /dev/asm_disk2]
ACTION: Ensure only Candidate or Provisioned disks are specified.
bash-4.2#

psaini79 commented 5 years ago

@babloo2642

Did you clean up the disks before racnode1 creation as specified in README.MD?

Also, please provide the output of the following:

docker exec -i -t racnode1 /bin/bash
crsctl check cluster
ps -u grid
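
For reference, the cleanup the README refers to typically amounts to wiping the ASM device headers on the Docker host before the containers are created. A minimal sketch, assuming /dev/asm_disk1 and /dev/asm_disk2 are the devices from the logs above and hold no data you need (run as root):

dd if=/dev/zero of=/dev/asm_disk1 bs=1M count=100
dd if=/dev/zero of=/dev/asm_disk2 bs=1M count=100

Leftover disk group metadata on these devices is what typically triggers the INS-30516/INS-30530 errors on a rerun.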

babloo2642 commented 5 years ago

@psaini79

I did not clean up the disks. After cleaning them up, everything worked well, including racnode2. Thank you so much for your help! I appreciate it. I think I can close the issue now.

psaini79 commented 5 years ago

@babloo2642

Yes, you can close the thread.

babloo2642 commented 5 years ago

@psaini79

Internally, I’m able to connect to the database from both racnode1 and racnode2:

[oracle@racnode1 ~]$ sqlplus system@\"racnode-scan:1521/ORCLCDB\"

SQL*Plus: Release 12.2.0.1.0 Production on Tue Aug 27 01:26:50 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter password: Last Successful login time: Mon Aug 26 2019 19:08:59 +00:00

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> quit Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[oracle@racnode2 ~]$ sqlplus system@\"racnode-scan:1521/ORCLCDB\"

SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 26 19:08:51 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter password: Last Successful login time: Mon Aug 26 2019 18:35:30 +00:00

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> quit Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

#########################################################################

I’m unable to connect to the RAC database externally. From racnode-cman1 I am able to connect, but from the Docker host I am unable to telnet to port 1521.

[oracle@racnode-cman1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-AUG-2019 02:08:47

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=3aa1690ab82b)(PORT=1521)))
TNS-12545: Connect failed because target host or object does not exist
TNS-12560: TNS:protocol adapter error
TNS-00515: Connect failed because target host or object does not exist
Linux Error: 2: No such file or directory
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 2: No such file or directory

[root@xxxxx ~]# ping racnode-cman1
PING racnode-cman1.example.com (172.16.1.15) 56(84) bytes of data.
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=4 ttl=64 time=0.033 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=5 ttl=64 time=0.030 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=6 ttl=64 time=0.033 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=7 ttl=64 time=0.032 ms
64 bytes from racnode-cman1.example.com (172.16.1.15): icmp_seq=8 ttl=64 time=0.033 ms
^C
--- racnode-cman1.example.com ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 6999ms
rtt min/avg/max/mdev = 0.027/0.031/0.033/0.007 ms

[root@xxxxx ~]# telnet racnode-cman1 1521
Trying 172.16.1.15...
telnet: connect to address 172.16.1.15: Connection refused

[oracle@xxxxx ~]$ sqlplus system@\"racnode-cman1:1521/ORCLCDB\"

SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 26 19:23:16 2019

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter password:
ERROR:
ORA-12541: TNS:no listener

Enter user-name: system
Enter password:
ERROR:
ORA-12162: TNS:net service name is incorrectly specified
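
(For reference: racnode-cman1 runs Oracle Connection Manager rather than a database listener, so lsnrctl failing inside that container is expected. The proxy itself is normally checked with cmctl; a minimal check, with the alias name left as a placeholder since it is defined in cman.ora, would be:

$ORACLE_HOME/bin/cmctl show services -c <cman_alias_from_cman.ora>)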

psaini79 commented 5 years ago

@babloo2642

This is a new issue about RAC connectivity, so you should have created a new thread. However, please provide the following:

docker ps -a

Did you map port 1521 to the Docker host?
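
If you are not sure, standard Docker commands can show whether the container publishes that port (container name taken from this thread):

docker port racnode-cman1
docker inspect --format '{{json .NetworkSettings.Ports}}' racnode-cman1

If nothing is listed for 1521, the container was created without a -p 1521:1521 mapping and would need to be recreated with one.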

psaini79 commented 5 years ago

Also, execute following steps:

Add a service on racnode1 as the oracle user:

srvctl add service -db ORCLCDB -service testsvc -preferred "racnode1,racnode2"
srvctl status service -d ORCLCDB
lsnrctl status

Log in to CMAN and try to connect to the new service.

On the Docker host, add a connect string for the new service to tnsnames.ora, pointing to the CMAN host (a sample entry is sketched below), then run: 1) tnsping <connect_string> 2) sqlplus system/<password>@<connect_string>
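
A sample tnsnames.ora entry for the new service, as a sketch only (the host and service name below are assumptions based on this thread):

TESTSVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode-cman1.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testsvc)
    )
  )

Then, from the Docker host: tnsping TESTSVC followed by sqlplus system/<password>@TESTSVC.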

Paste the output

babloo2642 commented 5 years ago

@psaini79

I'm closing this issue and have created a new thread, #1365. Please find the output in #1365. Thank you.