oraclebase / vagrant

Vagrant Builds
https://oracle-base.com/
GNU General Public License v3.0

Disk full issue cannot proceed on grid installation #36

Open charmiedba23 opened 1 year ago

charmiedba23 commented 1 year ago

I got lots of errors when I tried the Oracle 19c build on Linux 8. I'm not sure why the disk becomes full. Please help.

```
default: **
default: Do grid software-only installation. Sun Apr 2 11:51:58 UTC 2023
default: **
default: Preparing the home to patch...
default: Applying the patch /u01/software/34773504/34762026...
default: Successfully applied the patch.
default: The log can be found at: /tmp/GridSetupActions2023-04-02_11-51-58AM/installerPatchActions_2023-04-02_11-51-58AM.log
default: Launching Oracle Grid Infrastructure Setup Wizard...
default:
default: [WARNING] [INS-41808] Possible invalid choice for OSASM Group.
default:    CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
default:    ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
default: [WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
default:    CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
default:    ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
default: [WARNING] [INS-41812] OSDBA and OSASM are the same OS group.
default:    CAUSE: The chosen values for OSDBA group and the chosen value for OSASM group are the same.
default:    ACTION: Select an OS group that is unique for ASM administrators. The OSASM group should not be the same as the OS groups that grant privileges for Oracle ASM access, or for database administration.
default: [WARNING] [INS-40109] The specified Oracle Base location is not empty on this server.
default:    ACTION: Specify an empty location for Oracle Base.
default: [WARNING] [INS-32044] Specified location (/u01/app/19.0.0/grid) is on a volume without enough disk space on nodes:
default: [ol8-19-rac2].
default:
default: These nodes will be ignored and not participate in the configured Grid Infrastructure.
default:    CAUSE: Specified location is on a volume with insufficient disk space. Required disk space: 6.9 GB.
default: [FATAL] [INS-32070] Could not remove the nodes [ol8-19-rac2] corresponding to following error code: INS-32044.
default:    CAUSE: Installer requires that a minimum of 2 nodes remain for the Oracle Grid Infrastructure configuration to proceed.
default:    ACTION: Ensure that at least 2 nodes remain for the configuration to proceed, otherwise specify a single cluster node information.
default:    ADDITIONAL INFORMATION:
default:    - [INS-32044] Specified location (/u01/app/19.0.0/grid) is on a volume without enough disk space on nodes: [ol8-19-rac2]. These nodes will be ignored and not participate in the configured Grid Infrastructure.
default:
default:    - Cause: Specified location is on a volume with insufficient disk space. Required disk space: 6.9 GB.
default:
default: Moved the install session logs to:
default: /u01/app/oraInventory/logs/GridSetupActions2023-04-02_11-51-58AM
default: **
default: Run grid root scripts. Sun Apr 2 11:57:32 UTC 2023
default: **
default: sh: /u01/app/oraInventory/orainstRoot.sh: No such file or directory
default: sh: /u01/app/oraInventory/orainstRoot.sh: No such file or directory
default: Check /u01/app/19.0.0/grid/install/root_ol8-19-rac1_2023-04-02_11-57-32-702896729.log for the output of root script
default: sh: /u01/app/19.0.0/grid/root.sh: No such file or directory
default: **
default: Do grid configuration. Sun Apr 2 11:57:33 UTC 2023
default: **
default: Launching Oracle Grid Infrastructure Setup Wizard...
default:
default: [FATAL] [INS-32603] The central inventory was not detected.
default:    ACTION: The -executeConfigTools flag can only be used for an Oracle home software that has been already installed using the configure or upgrade options. Ensure that the orainstRoot.sh script, from the inventory location, has been executed.
default: Moved the install session logs to:
default: /u01/app/oraInventory/logs/GridSetupActions2023-04-02_11-57-33AM
default: **
default: Create additional diskgroups. Sun Apr 2 11:57:35 UTC 2023
default: **
default: /vagrant/scripts/oracle_grid_software_config.sh: line 48: /u01/app/19.0.0/grid/bin/sqlplus: Permission denied
default: **
default: Check cluster configuration. Sun Apr 2 11:57:35 UTC 2023
default: **
default: /vagrant/scripts/oracle_grid_software_config.sh: line 63: /u01/app/19.0.0/grid/bin/crsctl: No such file or directory
default: **
default: Unzip database software. Sun Apr 2 11:57:35 UTC 2023
default: **
default: **
default: Do database software-only installation. Sun Apr 2 11:59:07 UTC 2023
default: **
default: Preparing the home to patch...
default: Applying the patch /u01/software/34773504/34762026...
default: Successfully applied the patch.
default: The log can be found at: /tmp/InstallActions2023-04-02_11-59-08AM/installerPatchActions_2023-04-02_11-59-08AM.log
default: Launching Oracle Database Setup Wizard...
default:
default: [FATAL] [INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
default:    CAUSE: Before you can install Oracle RAC, you must install Oracle Grid Infrastructure (Oracle Clusterware and Oracle ASM) on all servers to create a cluster.
default:    ACTION: Oracle Grid Infrastructure for Clusterware is not installed. Install it either from the separate installation media included in your media pack, or install it by downloading it from Electronic Product Delivery (EPD) or the Oracle Technology Network (OTN). Oracle Grid Infrastructure normally is installed by a different operating system user than the one used for Oracle Database. It may need to be installed by your system administrator. See the installation guide for more details.
default: Moved the install session logs to:
default: /u01/app/oraInventory/logs/InstallActions2023-04-02_11-59-08AM
default: **
default: Run DB root scripts. Sun Apr 2 12:04:45 UTC 2023
default: **
default: Check /u01/app/oracle/product/19.0.0/dbhome_1/install/root_ol8-19-rac1_2023-04-02_12-04-45-806820172.log for the output of root script
default: sh: /u01/app/oracle/product/19.0.0/dbhome_1/root.sh: No such file or directory
default: **
default: OJVM Patch for DB Software. Sun Apr 2 12:04:46 UTC 2023
default: **
default: **
default: Patch Oracle Grid Infrastructure Software. Sun Apr 2 12:04:46 UTC 2023
default: HOSTNAME=localhost.localdomain
default: ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
default: **
default: **
default: Unzip software. Sun Apr 2 12:04:46 UTC 2023
default: **
default: Can't call method "uid" on an undefined value at /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/auto/database/bin/module/DBUtilServices.pm line 29.
default: **
default: Patch Oracle Grid Infrastructure Software. Sun Apr 2 12:04:46 UTC 2023
default: HOSTNAME=ol8-19-rac2
default: ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
default: **
default: **
default: Unzip software. Sun Apr 2 12:04:46 UTC 2023
default: **
default: 34773504/34762026/34765931/files/lib/libserver19.a/kdst1011.o: write error (disk full?). Continue? (y/n/^C)
default: warning: 34773504/34762026/34765931/files/lib/libserver19.a/kdst1011.o is probably truncated
default: /vagrant_scripts/oracle_software_patch.sh: line 26: opatchauto: command not found
default: **
default: Create database. Sun Apr 2 12:05:15 UTC 2023
default: **
default: [FATAL] java.lang.NullPointerException
default: **
default: Save state of PDB to enable auto-start. Sun Apr 2 12:05:17 UTC 2023
default: **
default: /vagrant/scripts/oracle_create_database.sh: line 32: /u01/app/oracle/product/19.0.0/dbhome_1/bin/sqlplus: Permission denied
default: **
default: Check cluster configuration. Sun Apr 2 12:05:17 UTC 2023
default: **
default: **
default: Output from crsctl stat res -t Sun Apr 2 12:05:17 UTC 2023
default: **
default: /vagrant/scripts/oracle_create_database.sh: line 44: /u01/app/19.0.0/grid/bin/crsctl: No such file or directory
default: **
default: Output from srvctl config database -d cdbrac Sun Apr 2 12:05:17 UTC 2023
default: **
default: /u01/app/oracle/product/19.0.0/dbhome_1/bin/srvctl: line 259: /u01/app/oracle/product/19.0.0/dbhome_1/srvm/admin/getcrshome: No such file or directory
default: PRCD-1027 : Failed to retrieve database cdbrac
default: PRCR-1070 : Failed to check if resource ora.cdbrac.db is registered
default: CRS-0184 : Cannot communicate with the CRS daemon.
default: **
default: Output from srvctl status database -d cdbrac Sun Apr 2 12:05:17 UTC 2023
default: **
default: /u01/app/oracle/product/19.0.0/dbhome_1/bin/srvctl: line 259: /u01/app/oracle/product/19.0.0/dbhome_1/srvm/admin/getcrshome: No such file or directory
default: PRCD-1027 : Failed to retrieve database cdbrac
default: PRCR-1070 : Failed to check if resource ora.cdbrac.db is registered
default: CRS-0184 : Cannot communicate with the CRS daemon.
default: **
default: Output from v$active_instances Sun Apr 2 12:05:18 UTC 2023
default: **
default: /vagrant/scripts/oracle_create_database.sh: line 59: /u01/app/oracle/product/19.0.0/dbhome_1/bin/sqlplus: Permission denied
default: **
default: Setup End. Sun Apr 2 12:05:18 UTC 2023
default: **
```

oraclebase commented 1 year ago

Have you logged into the VMs and checked the disk space situation?

You should have the root drive, and a separate virtual disk for /u01 that can expand to 100G. If for some reason the virtual disk to support /u01 has not been created, you may have put /u01 on the root disk, which will mean there is not enough space to complete the installation.
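The "check the disk space inside the VM" step can be scripted. Below is a minimal sketch of the threshold check only; the `df -P` output is a hard-coded, hypothetical sample for illustration (on a real node you would capture it via `vagrant ssh` and `df -P /u01`), and the numbers are made up:

```shell
# Sketch: flag a mount whose free space is below what the grid installer needs.
# The df output below is a hard-coded, hypothetical sample for illustration.
sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sdb1 104806400 98000000 6806400 94% /u01'

required_kb=$((7 * 1024 * 1024))   # the installer reported needing ~6.9 GB
avail_kb=$(printf '%s\n' "$sample" | awk 'NR==2 {print $4}')  # Available column

if [ "$avail_kb" -lt "$required_kb" ]; then
  echo "NOT ENOUGH SPACE on /u01"
else
  echo "OK"
fi
```

For this sample it prints "NOT ENOUGH SPACE on /u01", since 6806400 KB is below the roughly 7 GB requirement.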

This sort of thing normally happens for one of several reasons:

1) You've altered the config files and made a mistake.
2) Your host PC doesn't have enough disk space to complete the operation.
3) There is something wrong with your copy of the git repository. Reclone it.

Cheers

Tim...

charmiedba23 commented 1 year ago

In vagrant.yml I only changed the asm disks location

```yaml
shared:
  box: oraclebase/oracle-8
  non_rotational: 'on'
  asm_crs_disk_1: C:\Users\charm\Documents\Oracle19cLinux8\shared\ol8_19c_rac\asm_crs_disk_1.vdi
  asm_crs_disk_2: C:\Users\charm\Documents\Oracle19cLinux8\shared\ol8_19c_rac\asm_crs_disk_2.vdi
  asm_crs_disk_3: C:\Users\charm\Documents\Oracle19cLinux8\shared\ol8_19c_rac\asm_crs_disk_3.vdi
  asm_crs_disk_size: 2
  asm_data_disk_1: C:\Users\charm\Documents\Oracle19cLinux8\shared\ol8_19c_rac\asm_data_disk_1.vdi
  asm_data_disk_size: 40
  asm_reco_disk_1: C:\Users\charm\Documents\Oracle19cLinux8\shared\ol8_19c_rac\asm_reco_disk_1.vdi
  asm_reco_disk_size: 20

dns:
  vm_name: ol8_19_dns
  mem_size: 1024
  cpus: 2
  public_ip: 192.168.56.100

node1:
  vm_name: ol8_19_rac1
  mem_size: 7168
  cpus: 4
  public_ip: 192.168.56.101
  private_ip: 192.168.1.101
  u01_disk: .\ol8_19_rac1_u01.vdi

node2:
  vm_name: ol8_19_rac2
  mem_size: 6144
  cpus: 4
  public_ip: 192.168.56.102
  private_ip: 192.168.1.102
  u01_disk: .\ol8_19_rac2_u01.vdi
```

charmiedba23 commented 1 year ago

My PC still has 600GB available. I didn't change anything aside from the ASM disk locations.

charmiedba23 commented 1 year ago

Oh, I remember: the patches were changed. This patch is no longer available:

Patch 30783556: COMBO OF OJVM RU COMPONENT 19.7.0.0.200414 + GI RU 19.7.0.0.200414

So I downloaded this latest one:

Patch 34773504: Combo OJVM RU 19.18.0.0.230117 and GI RU 19.18.0.230117

oraclebase commented 1 year ago

Are you using a really old copy of the repository? The repo was updated for the January 2023 patches a couple of months ago.

Please make sure you do a git pull and try again.

Also, did you actually log into the VMs and check the disk space internally? What do you see?

charmiedba23 commented 1 year ago

Ok, will try again. Thanks Tim!

Regards,
Charmaine L. Medina

MrGuruTest commented 1 year ago

Hello Tim,

I face this issue too. After investigating: sometimes when provisioning a VM with Vagrant, the disk partition in the VM guest is not "sdb". The "sdb" name gets assigned to one of the 2GB disks, which causes the "Disk full issue cannot proceed on grid installation". I'm not sure why, but I worked around it by destroying and re-provisioning the VM, and the issue was resolved.

However, the issue remains with the raw disks shared across nodes.

Node1:

```
[root@ol8-19-rac1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   64G  0 disk
├─sda1   8:1    0    1G  0 part /boot
├─sda2   8:2    0    4G  0 part [SWAP]
└─sda3   8:3    0   59G  0 part /
sdb      8:16   0  100G  0 disk
└─sdb1   8:17   0  100G  0 part /u01
sdc      8:32   0    2G  0 disk
└─sdc1   8:33   0    2G  0 part
sdd      8:48   0    2G  0 disk
└─sdd1   8:49   0    2G  0 part
sde      8:64   0    2G  0 disk
└─sde1   8:65   0    2G  0 part
sdf      8:80   0   20G  0 disk
└─sdf1   8:81   0   20G  0 part
sdg      8:96   0   40G  0 disk
└─sdg1   8:97   0   40G  0 part
```

Node2:

```
[root@ol8-19-rac2 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   64G  0 disk
├─sda1   8:1    0    1G  0 part /boot
├─sda2   8:2    0    4G  0 part [SWAP]
└─sda3   8:3    0   59G  0 part /
sdb      8:16   0    2G  0 disk
└─sdb1   8:17   0    2G  0 part
sdc      8:32   0  100G  0 disk
└─sdc1   8:33   0  100G  0 part /u01
sdd      8:48   0    2G  0 disk
└─sdd1   8:49   0    2G  0 part
sde      8:64   0   40G  0 disk   <<<< This
└─sde1   8:65   0   40G  0 part
sdf      8:80   0    2G  0 disk   <<<< This
└─sdf1   8:81   0    2G  0 part
sdg      8:96   0   20G  0 disk
└─sdg1   8:97   0   20G  0 part
```

On Node2 you can see that "sde" and "sdf" have unexpected sizes and differ from what "configure_shared_disk.sh" expects, which results in the ASM disk names pointing at the wrong devices:

```
[root@ol8-19-rac1 ~]# ls -al /dev/oracleasm/*
lrwxrwxrwx. 1 root root 7 Jun 16 08:13 /dev/oracleasm/asm-crs-disk1 -> ../sdc1
lrwxrwxrwx. 1 root root 7 Jun 16 08:13 /dev/oracleasm/asm-crs-disk2 -> ../sdd1
lrwxrwxrwx. 1 root root 7 Jun 16 08:13 /dev/oracleasm/asm-crs-disk3 -> ../sde1
lrwxrwxrwx. 1 root root 7 Jun 16 08:13 /dev/oracleasm/asm-data-disk1 -> ../sdf1
lrwxrwxrwx. 1 root root 7 Jun 16 08:13 /dev/oracleasm/asm-reco-disk1 -> ../sdg1
```

```
[root@ol8-19-rac2 ~]# ls -al /dev/oracleasm/*
lrwxrwxrwx. 1 root root 7 Jun 16 07:43 /dev/oracleasm/asm-crs-disk1 -> ../sdb1
lrwxrwxrwx. 1 root root 7 Jun 16 07:43 /dev/oracleasm/asm-crs-disk2 -> ../sde1
lrwxrwxrwx. 1 root root 7 Jun 16 07:43 /dev/oracleasm/asm-crs-disk3 -> ../sdg1
lrwxrwxrwx. 1 root root 7 Jun 16 07:43 /dev/oracleasm/asm-data-disk1 -> ../sdf1
lrwxrwxrwx. 1 root root 7 Jun 16 07:43 /dev/oracleasm/asm-reco-disk1 -> ../sdd1
```
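Since the kernel assigns the device names nondeterministically here, one hedged workaround is to identify a disk by its size rather than assuming it is `/dev/sdb`. This is only a sketch of size-based selection, not how the repo's `configure_shared_disk.sh` actually works; the `lsblk -b -d -n -o NAME,SIZE` output below is a hard-coded, hypothetical sample (100G = 107374182400 bytes):

```shell
# Sketch: pick the 100G /u01 candidate disk by size instead of by name,
# since VirtualBox may attach disks in a different order on each node.
# The lsblk output below is a hard-coded, hypothetical sample.
sample='sda 68719476736
sdb 2147483648
sdc 107374182400
sdd 2147483648'

# Print the name of the first device whose size is exactly 100 GiB.
u01_disk=$(printf '%s\n' "$sample" | awk '$2 == 107374182400 {print $1; exit}')
echo "/dev/${u01_disk}"
```

For this sample it prints `/dev/sdc`, matching the Node2 layout above where the 100G disk landed on sdc instead of sdb.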

I tried to remove and re-provision the VMs many times, but it didn't help. This issue occurs on both nodes whenever I provision fresh with Vagrant.

Could you advise, please?

Best Regards, Amornchai L.

oraclebase commented 1 year ago

Hi. I have no explanation for this. I can't reproduce it. Every time I destroy and recreate the system it just works for me. All I can suggest is:

- Make sure you have the latest version of the repo.
- Make sure you have the latest version of VirtualBox.
- Make sure you have the latest version of Vagrant.
- Destroy everything, and make sure there are no stray files left behind. Literally do a manual check for the presence of VMDK files.
- Build it again...

charmiedba23 commented 1 year ago

Updating from the repo works for me!!! Thanks!!!

Charmaine L. Medina
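The manual check for stray disk image files suggested in the thread can be sketched as follows; the temp directory and file name here are purely illustrative stand-ins for a real vagrant project directory:

```shell
# Sketch: search a vagrant project directory for leftover .vdi/.vmdk images
# that could interfere with the next build. A temp dir with a planted file
# stands in for the real project directory in this illustration.
demo=$(mktemp -d)
touch "$demo/ol8_19_rac1_u01.vdi"

stray=$(find "$demo" \( -name '*.vdi' -o -name '*.vmdk' \) -type f)
if [ -n "$stray" ]; then
  echo "stray disk images found:"
  echo "$stray"
fi
```

In a real cleanup you would run the `find` against the directory holding your Vagrant boxes and shared disks, then delete whatever it reports before rebuilding.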

oraclebase commented 1 year ago

Great news!