oracle / vagrant-projects

Vagrant projects for Oracle products and other examples
Universal Permissive License v1.0

OracleRAC OL8 - failed to create /u01 as it presented as /dev/sdc #520

Closed. DT-234 closed this issue 2 months ago.

DT-234 commented 2 months ago

Trying to build OracleRAC OL8, /u01 is presented as /dev/sdc instead of /dev/sdb:

[root@node1 ~]# lsblk -pn
/dev/sda                          8:0    0   37G  0 disk
├─/dev/sda1                       8:1    0    1G  0 part /boot
└─/dev/sda2                       8:2    0   36G  0 part
  ├─/dev/mapper/vg_main-lv_root 252:0    0   32G  0 lvm  /
  └─/dev/mapper/vg_main-lv_swap 252:1    0    4G  0 lvm  [SWAP]
/dev/sdb                          8:16   0   10G  0 disk
└─/dev/sdb1                       8:17   0   10G  0 part
/dev/sdc                          8:32   0  100G  0 disk      <===== /u01 disk
/dev/sdd                          8:48   0   10G  0 disk
├─/dev/sdd1                       8:49   0    8G  0 part
└─/dev/sdd2                       8:50   0    2G  0 part
/dev/sde                          8:64   0   10G  0 disk
├─/dev/sde1                       8:65   0    8G  0 part
└─/dev/sde2                       8:66   0    2G  0 part
/dev/sdf                          8:80   0   10G  0 disk
├─/dev/sdf1                       8:81   0    8G  0 part
└─/dev/sdf2                       8:82   0    2G  0 part
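
Device letters such as sdb/sdc are assigned by the kernel in probe order and are not guaranteed to be stable across boots or rebuilds. A hedged one-liner to locate the 100G /u01 candidate disk regardless of its letter:

    [root@node1 ~]# lsblk -dpn -o NAME,SIZE | awk '$2 == "100G" {print $1}'
    /dev/sdc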

Actions taken to reproduce:

1. # vagrant destroy -f
2. Set asm_disk_size: 10 in OracleRAC\OL8\config\vagrant.yml
3. # vagrant up

Log attached: VirtualBox_vagrant_failed to create u01 as it presented as sdc.txt

Actions taken to resolve:

1. # vagrant destroy -f
2. Set asm_disk_size: 11 in OracleRAC\OL8\config\vagrant.yml
3. # vagrant up
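
For reference, the effective change can be verified with grep. The key name is taken from the steps above; the output shown assumes asm_disk_size sits at the top level of the file:

    $ grep asm_disk_size OracleRAC/OL8/config/vagrant.yml
    asm_disk_size: 11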


rcitton commented 2 months ago

With the same Vagrant and the same VirtualBox I cannot reproduce this, and I cannot see how "asm_disk_size" would change the behavior. With asm_disk_size: 10, here is the log evidence showing that sdb is in use for /u01:

    node2: -----------------------------------------------------------------
    node2: INFO: 2024-08-22 11:06:06: Setting-up /u01 disk
    node2: -----------------------------------------------------------------
    node2:   Physical volume "/dev/sdb1" successfully created.
    node2:   Volume group "VolGroupU01" successfully created
    node2:   Logical volume "LogVolU01" created.
    node2: meta-data=/dev/VolGroupU01/LogVolU01 isize=512    agcount=4, agsize=6553344 blks
    node2:          =                       sectsz=512   attr=2, projid32bit=1
    node2:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    node2:          =                       reflink=1    bigtime=0 inobtcount=0
    node2: data     =                       bsize=4096   blocks=26213376, imaxpct=25
    node2:          =                       sunit=0      swidth=0 blks
    node2: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    node2: log      =internal log           bsize=4096   blocks=25600, version=2
    node2:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    node2: realtime =none                   extsz=4096   blocks=0, rtextents=0
(...)
    node1: -----------------------------------------------------------------
    node1: INFO: 2024-08-22 11:17:04: Setting-up /u01 disk
    node1: -----------------------------------------------------------------
    node1:   Physical volume "/dev/sdb1" successfully created.
    node1:   Volume group "VolGroupU01" successfully created
    node1:   Logical volume "LogVolU01" created.
    node1: meta-data=/dev/VolGroupU01/LogVolU01 isize=512    agcount=4, agsize=6553344 blks
    node1:          =                       sectsz=512   attr=2, projid32bit=1
    node1:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    node1:          =                       reflink=1    bigtime=0 inobtcount=0
    node1: data     =                       bsize=4096   blocks=26213376, imaxpct=25
    node1:          =                       sunit=0      swidth=0 blks
    node1: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    node1: log      =internal log           bsize=4096   blocks=25600, version=2
    node1:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    node1: realtime =none                   extsz=4096   blocks=0, rtextents=0

from inside the VMs:

[vagrant@node2 ~]$ lsblk -pn
/dev/sda                                8:0    0   37G  0 disk 
├─/dev/sda1                             8:1    0    1G  0 part /boot
└─/dev/sda2                             8:2    0   36G  0 part 
  ├─/dev/mapper/vg_main-lv_root       252:0    0   32G  0 lvm  /
  └─/dev/mapper/vg_main-lv_swap       252:1    0    4G  0 lvm  [SWAP]
/dev/sdb                                8:16   0  100G  0 disk 
└─/dev/sdb1                             8:17   0  100G  0 part 
  └─/dev/mapper/VolGroupU01-LogVolU01 252:2    0  100G  0 lvm  /u01
/dev/sdc                                8:32   0   10G  0 disk 
├─/dev/sdc1                             8:33   0    8G  0 part 
└─/dev/sdc2                             8:34   0    2G  0 part 
/dev/sdd                                8:48   0   10G  0 disk 
├─/dev/sdd1                             8:49   0    8G  0 part 
└─/dev/sdd2                             8:50   0    2G  0 part 
/dev/sde                                8:64   0   10G  0 disk 
├─/dev/sde1                             8:65   0    8G  0 part 
└─/dev/sde2                             8:66   0    2G  0 part 
/dev/sdf                                8:80   0   10G  0 disk 
├─/dev/sdf1                             8:81   0    8G  0 part 
└─/dev/sdf2                             8:82   0    2G  0 part 

--

[vagrant@node1 ~]$ lsblk -pn
/dev/sda                                8:0    0   37G  0 disk 
├─/dev/sda1                             8:1    0    1G  0 part /boot
└─/dev/sda2                             8:2    0   36G  0 part 
  ├─/dev/mapper/vg_main-lv_root       252:0    0   32G  0 lvm  /
  └─/dev/mapper/vg_main-lv_swap       252:1    0    4G  0 lvm  [SWAP]
/dev/sdb                                8:16   0  100G  0 disk 
└─/dev/sdb1                             8:17   0  100G  0 part 
  └─/dev/mapper/VolGroupU01-LogVolU01 252:2    0  100G  0 lvm  /u01
/dev/sdc                                8:32   0   10G  0 disk 
├─/dev/sdc1                             8:33   0    8G  0 part 
└─/dev/sdc2                             8:34   0    2G  0 part 
/dev/sdd                                8:48   0   10G  0 disk 
├─/dev/sdd1                             8:49   0    8G  0 part 
└─/dev/sdd2                             8:50   0    2G  0 part 
/dev/sde                                8:64   0   10G  0 disk 
├─/dev/sde1                             8:65   0    8G  0 part 
└─/dev/sde2                             8:66   0    2G  0 part 
/dev/sdf                                8:80   0   10G  0 disk 
├─/dev/sdf1                             8:81   0    8G  0 part 
└─/dev/sdf2                             8:82   0    2G  0 part 
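
The "Setting-up /u01 disk" log above implies roughly this LVM sequence, given here as a hedged shell sketch (the actual provisioning script under OracleRAC/OL8/scripts may differ; the device name is taken from the log):

    pvcreate /dev/sdb1                               # "Physical volume ... successfully created"
    vgcreate VolGroupU01 /dev/sdb1                   # "Volume group ... successfully created"
    lvcreate -l 100%FREE -n LogVolU01 VolGroupU01    # "Logical volume ... created"
    mkfs.xfs /dev/VolGroupU01/LogVolU01              # produces the meta-data output above
    mount /dev/VolGroupU01/LogVolU01 /u01            # ends up as the /u01 mount seen in lsblk
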
DT-234 commented 2 months ago

> With the same Vagrant and the same VirtualBox I cannot reproduce this, and I cannot see how "asm_disk_size" would change the behavior. With asm_disk_size: 10, here is the log evidence showing that sdb is in use for /u01: (...)

Since this /u01 issue appeared together with issue #519, I will revert the solution from #519 and try again with a 10G asm_disk_size.

DT-234 commented 2 months ago

Actions taken as below:

1. Below is the output after cleaning up OracleRAC\OL8 (a sketch of the cleanup commands follows the listing):

/home/mobaxterm/VBVMs/vagrant-projects-main/OracleRAC/OL8  vagrant global-status

--------------------
Detected virtualbox
--------------------
getting Proxy Configuration from Host...
id       name   provider   state    directory
----------------------------------------------------------------------------------------------------
569947a  host1  virtualbox poweroff D:/Software/Virtual Machines/vagrant-projects-main/OracleDG/OL8
5f53156  host2  virtualbox poweroff D:/Software/Virtual Machines/vagrant-projects-main/OracleDG/OL8

/home/mobaxterm/VBVMs/vagrant-projects-main/OracleRAC/OL8  VBoxManage.exe list hdds | grep Location | grep -v Snap
Location:       D:\Software\Virtual Machines\dg-213-ol8\dg-213-ol8-primary\box-disk001.vmdk
Location:       D:\Software\Virtual Machines\vagrant-projects-main\OracleDG\OL8\primary_u01.vdi
Location:       D:\Software\Virtual Machines\vagrant-projects-main\OracleDG\OL8\primary_oradata_disk0.vdi
Location:       D:\Software\Virtual Machines\vagrant-projects-main\OracleDG\OL8\primary_oradata_disk1.vdi
Location:       D:\Software\Virtual Machines\dg-213-ol8\dg-213-ol8-standby\box-disk001.vmdk
Location:       D:\Software\Virtual Machines\vagrant-projects-main\OracleDG\OL8\standby_u01.vdi
Location:       D:\Software\Virtual Machines\vagrant-projects-main\OracleDG\OL8\standby_oradata_disk0.vdi
Location:       D:\Software\Virtual Machines\vagrant-projects-main\OracleDG\OL8\standby_oradata_disk1.vdi
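
For completeness, a sketch of the cleanup and verification commands behind this listing; the --prune flag also drops stale entries from Vagrant's cached machine index:

    $ vagrant destroy -f                           # tear down the OracleRAC/OL8 machines
    $ vagrant global-status --prune                # refresh the cached machine index
    $ VBoxManage.exe list hdds | grep Location     # confirm no leftover RAC disk images remain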

2. Windows host resources before performing any task:

/home/mobaxterm  systeminfo | egrep -w "OS Name|OS Version|Memory"
OS Name:                   Microsoft Windows 11 Pro N
OS Version:                10.0.22631 N/A Build 22631
Total Physical Memory:     32,767 MB
Available Physical Memory: 24,852 MB
Virtual Memory: Max Size:  65,535 MB
Virtual Memory: Available: 56,127 MB
Virtual Memory: In Use:    9,408 MB

3. Removed the sleep 60 from OracleRAC/OL8/scripts/setup.sh as mentioned in #519; the modified setup.sh is attached, renamed to setup.txt

4. Changed asm_disk_size to 10 in OracleRAC/OL8/config/vagrant.yml; the modified vagrant.yml is attached, renamed to vagrant.txt (see the sketch below)
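
One way to apply steps 3 and 4 from a shell, as a sketch (it assumes the lines match exactly as written; edit the files by hand if they differ):

    $ sed -i '/sleep 60/d' OracleRAC/OL8/scripts/setup.sh
    $ sed -i 's/asm_disk_size: .*/asm_disk_size: 10/' OracleRAC/OL8/config/vagrant.yml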

5. /home/mobaxterm/VBVMs/vagrant-projects-main/OracleRAC/OL8  vagrant up

6. The mount failed after UEK6 was installed and the VMs rebooted (a note on the likely cause follows the logs):

    node2: Rebooting to make UEK6 the running kernel, a prerequisite for ASMLib
==> node2: Waiting for machine to reboot...
==> node2: Running provisioner: shell...
    node2: Running: inline script
==> node2: Running provisioner: shell...
    node2: Running: C:/Users/user/AppData/Local/Temp/Mxt242/tmp/vagrant-shell20240822-17556-pa3ifp.sh
    node2: /sbin/mount.vboxsf: mounting failed with the error: No such device
    node2: -----------------------------------------------------------------
    node2: 2024-08-22 15:26:04: Make the setup.env
    node2: -----------------------------------------------------------------
    node2: /tmp/vagrant-shell: line 487: /vagrant/config/setup.env: No such file or directory

    node1: Rebooting to make UEK6 the running kernel, a prerequisite for ASMLib
==> node1: Waiting for machine to reboot...
==> node1: Running provisioner: shell...
    node1: Running: inline script
==> node1: Running provisioner: shell...
    node1: Running: C:/Users/user/AppData/Local/Temp/Mxt242/tmp/vagrant-shell20240822-17556-50aisx.sh
    node1: /sbin/mount.vboxsf: mounting failed with the error: No such device
    node1: -----------------------------------------------------------------
    node1: 2024-08-22 15:53:55: Setup the environment variables
    node1: -----------------------------------------------------------------
    node1: /tmp/vagrant-shell: line 583: /vagrant/config/setup.env: No such file or directory
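
The "No such device" from /sbin/mount.vboxsf typically means the vboxsf kernel module is not available for the freshly booted UEK6 kernel, so /vagrant (and with it /vagrant/config/setup.env) cannot be mounted. A hedged manual recovery, assuming the VirtualBox Guest Additions are installed in the guest:

    [root@node1 ~]# /sbin/rcvboxadd setup    # rebuild the Guest Additions modules for the running kernel
    $ vagrant provision                      # then, from the host, re-run the provisioners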

7. Windows host resources while node1 rebooted after UEK6 was installed:

/home/mobaxterm  systeminfo | egrep -w "OS Name|OS Version|Memory"
OS Name:                   Microsoft Windows 11 Pro N
OS Version:                10.0.22631 N/A Build 22631
Total Physical Memory:     32,767 MB
Available Physical Memory: 20,686 MB
Virtual Memory: Max Size:  65,535 MB
Virtual Memory: Available: 52,146 MB
Virtual Memory: In Use:    13,389 MB

8. Executed vagrant provision to continue the deployment, since it had previously failed at the mount step. This time /u01 is presented as sdb on both nodes:

[root@node1 ~]# lsblk -pn
/dev/sda                                8:0    0   37G  0 disk
├─/dev/sda1                             8:1    0    1G  0 part /boot
└─/dev/sda2                             8:2    0   36G  0 part
  ├─/dev/mapper/vg_main-lv_root       252:0    0   32G  0 lvm  /
  └─/dev/mapper/vg_main-lv_swap       252:1    0    4G  0 lvm  [SWAP]
/dev/sdb                                8:16   0  100G  0 disk
└─/dev/sdb1                             8:17   0  100G  0 part
  └─/dev/mapper/VolGroupU01-LogVolU01 252:2    0  100G  0 lvm  /u01
/dev/sdc                                8:32   0   10G  0 disk
├─/dev/sdc1                             8:33   0    8G  0 part
└─/dev/sdc2                             8:34   0    2G  0 part
/dev/sdd                                8:48   0   10G  0 disk
├─/dev/sdd1                             8:49   0    8G  0 part
└─/dev/sdd2                             8:50   0    2G  0 part
/dev/sde                                8:64   0   10G  0 disk
├─/dev/sde1                             8:65   0    8G  0 part
└─/dev/sde2                             8:66   0    2G  0 part
/dev/sdf                                8:80   0   10G  0 disk
├─/dev/sdf1                             8:81   0    8G  0 part
└─/dev/sdf2                             8:82   0    2G  0 part

[root@node2 ~]# lsblk -pn
/dev/sda                                8:0    0   37G  0 disk
├─/dev/sda1                             8:1    0    1G  0 part /boot
└─/dev/sda2                             8:2    0   36G  0 part
  ├─/dev/mapper/vg_main-lv_root       252:0    0   32G  0 lvm  /
  └─/dev/mapper/vg_main-lv_swap       252:1    0    4G  0 lvm  [SWAP]
/dev/sdb                                8:16   0  100G  0 disk
└─/dev/sdb1                             8:17   0  100G  0 part
  └─/dev/mapper/VolGroupU01-LogVolU01 252:2    0  100G  0 lvm  /u01
/dev/sdc                                8:32   0   10G  0 disk
├─/dev/sdc1                             8:33   0    8G  0 part
└─/dev/sdc2                             8:34   0    2G  0 part
/dev/sdd                                8:48   0   10G  0 disk
├─/dev/sdd1                             8:49   0    8G  0 part
└─/dev/sdd2                             8:50   0    2G  0 part
/dev/sde                                8:64   0   10G  0 disk
├─/dev/sde1                             8:65   0    8G  0 part
└─/dev/sde2                             8:66   0    2G  0 part
/dev/sdf                                8:80   0   10G  0 disk
├─/dev/sdf1                             8:81   0    8G  0 part
└─/dev/sdf2                             8:82   0    2G  0 part

9. In the end, the cluster was created successfully. Full log attached: VirtualBox_vagrant_OracleRAC_OL8_20240822.txt

node1: SUCCESS: 2024-08-23 02:59:13: Oracle RAC on Vagrant has been created successfully!

rcitton commented 2 months ago

Not sure why the sleep 60 makes no difference in my case, while in yours it does the job. If so, I see no harm in adding it anyway...

DT-234 commented 2 months ago

> Not sure why the sleep 60 makes no difference in my case, while in yours it does the job. If so, I see no harm in adding it anyway...

Yes, without the sleep 60 the deployment fails, and vagrant provision has to be run to complete it.

rcitton commented 2 months ago

sleep 60 is now implemented; please check, and if it resolves the problem, close the issue.
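
For reference, the change under discussion is presumably a delay of this shape, placed so the rebooted guest has time to settle before provisioning continues (its exact location in OracleRAC/OL8/scripts/setup.sh is an assumption here):

    sleep 60    # wait for the guest to settle after the UEK6 reboot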