Closed: @infiniteshi closed this issue 1 year ago.
A bit of an update/follow-up to my own question above:
I created a clean slate project as a guinea pig.
Starting point: I ran df -h /u01 as the "oracle" user. I had 32 GB allocated to start with.
First, ran vagrant plugin install vagrant-disksize
and installed version 0.1.3 successfully.
Then I added the following line to the Vagrantfile: config.disksize.size='42GB', enlarging the disk by 10 GB. Saved the change, ran vagrant halt, and then vagrant up.
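For reference, a minimal Vagrantfile containing that setting might look something like this (the box name below is an assumption for illustration; only the config.disksize.size line comes from this thread):

```ruby
# Minimal sketch -- requires the vagrant-disksize plugin:
#   vagrant plugin install vagrant-disksize
Vagrant.configure("2") do |config|
  config.vm.box = "oraclelinux/8"   # assumed box name, for illustration only
  config.disksize.size = '42GB'     # grow the first virtual disk to 42 GB
end
```

Note that vagrant-disksize only grows the virtual disk itself; the guest's partition, logical volume, and filesystem still have to be resized separately inside the VM.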
The modification seemed to be successful because I received the green success message from the plugin.
To verify the effect, I ran df -h /u01 as the "oracle" user again. It turns out it still reports only 32 GB, rather than 42 GB.
Does anybody know how to fix this?
Thank you!
TL;DR -- run as root:
dnf install -y cloud-utils-growpart
growpart /dev/sda 2
lvresize -l +100%FREE -r /dev/vg_main/lv_root
More details:
You need to resize the disk partition, the logical volume, and the filesystem, in that order. For example:
[root@localhost ~]# # Install growpart
[root@localhost ~]# dnf install cloud-utils-growpart
<redacted>
[root@localhost ~]# # Check partitions and resize:
[root@localhost ~]# fdisk -l /dev/sda
Disk /dev/sda: 63 GiB, 67670900736 bytes, 132169728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa93be9e
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 77594623 75495424 36G 8e Linux LVM
[root@localhost ~]# growpart /dev/sda 2
CHANGED: partition=2 start=2099200 old: size=75495424 end=77594623 new: size=130070495 end=132169694
[root@localhost ~]# fdisk -l /dev/sda
Disk /dev/sda: 63 GiB, 67670900736 bytes, 132169728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa93be9e
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 132169694 130070495 62G 8e Linux LVM
[root@localhost ~]# # The volume group should be updated accordingly:
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_main 1 2 0 wz--n- <62.02g 26.02g
[root@localhost ~]# # Allocate the space to the logical volume and resize the filesystem
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root vg_main -wi-ao---- <32.00g
lv_swap vg_main -wi-ao---- 4.00g
[root@localhost ~]# lvresize -l +100%FREE -r /dev/vg_main/lv_root
Size of logical volume vg_main/lv_root changed from <32.00 GiB (8191 extents) to <58.02 GiB (14853 extents).
Logical volume vg_main/lv_root successfully resized.
meta-data=/dev/mapper/vg_main-lv_root isize=512 agcount=4, agsize=2096896 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=8387584, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=4095, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 8387584 to 15209472
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root vg_main -wi-ao---- <58.02g
lv_swap vg_main -wi-ao---- 4.00g
[root@localhost ~]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root 59G 2.2G 56G 4% /
Hi @AmedeeBulle, thank you very much for the instructions. They worked very well on my experimental VM and I successfully enlarged the disk size. But when I was trying to reproduce the success on my real-world VM, I ran into the following error at step 1.
[root@oracle-21c-vagrant vagrant]# dnf install -y cloud-utils-growpart
error: db5 error(28) from dbenv->open: No space left on device
error: cannot open Packages index using db5 - No space left on device (28)
error: cannot open Packages database in /var/lib/rpm
Error: Error: rpmdb open failed
It seems I don't have enough space to install the plugin?! How can I get out of this catch-22 situation?
Thanks!
Follow-up question:
It came to me that maybe I can delete some dated or useless files to release some space. I've removed the audit files under /opt/oracle/admin/ORCLCDB/adump but that wasn't enough.
[root@oracle-21c-vagrant /]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.2G 0 1.2G 0% /dev
tmpfs 1.3G 0 1.3G 0% /dev/shm
tmpfs 1.3G 16M 1.2G 2% /run
tmpfs 1.3G 0 1.3G 0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root 33G 33G 20K 100% /
/dev/sda1 495M 147M 349M 30% /boot
vagrant 466G 274G 193G 59% /vagrant
tmpfs 248M 0 248M 0% /run/user/1000
What other directories or files are safe to remove in order to release more space (just need enough to install the growpart plugin)?
@infiniteshi - you need to free up some space before you install anything. du is your friend.
Personally, I'd start with the /var directory, and delete unwanted log files. (You can also copy/move files to your /vagrant directory if you wish.)
You can list files sorted by size with -S, and delete archived logs (e.g. ones ending in -yyyyMMdd). For example:
ls -lhSr /var/log/
Large files may also be in a user's home directory.
You can run du -sh. The -h flag prints sizes in human-readable format (e.g., 1K 234M 2G). For example:
sudo du -sh /var/*
But my favorite is -m, to print sizes in megabytes so they can be sorted numerically. For example:
sudo du -sm /var/* | sort -n
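The du | sort pattern above can also be wrapped in a tiny helper (the function name here is made up for illustration):

```shell
#!/bin/sh
# largest_dirs: print the immediate subdirectories of $1 with their sizes
# in megabytes, sorted ascending, so the biggest consumers appear last.
# This just wraps the `du -sm ... | sort -n` pattern shown above.
largest_dirs() {
  du -sm "$1"/* 2>/dev/null | sort -n
}

# Example (may need sudo for system directories):
#   largest_dirs /var
```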
If you're unsure which files to delete, you can also try:
Removing old kernels:
sudo dnf remove -y --oldinstallonly --setopt installonly_limit=2 kernel
Cleaning the dnf cache:
sudo dnf clean all
Hi @hussam-qasem, thank you very much for the instructions, and things worked out very well.
The command sudo du -sh /var/* gave me great guidance on how the disk space was being used, and I chose to remove all the files in the /var/cache/ directory, believing that they'll be re-generated when needed. So I ran the following commands to first back up and then delete:
[root@oracle-21c-vagrant-4 ~]# mkdir /root/backups.var.cache.07062023_0828AM/
[root@oracle-21c-vagrant-4 ~]# cp -avr /var/cache /root/backups.var.cache.07062023_0828AM/
[root@oracle-21c-vagrant-4 cache]# rm -rf /var/cache/*
Then I gained enough wiggle room back to work other things out.
(On a side note, and for what it's worth: I actually have 5 VM instances running in the same VirtualBox: oracle-21c-vagrant, oracle-21c-vagrant-copy, oracle-21c-vagrant-3, oracle-21c-vagrant-3.0, and oracle-21c-vagrant-4. The one that I initially ran into the "No space left on device" error with was oracle-21c-vagrant, but I used oracle-21c-vagrant-4 as a guinea pig to test-run all the commands, and it was also the one where I cleaned up /var/cache/. As everything went very well on oracle-21c-vagrant-4 and I was about to reproduce the steps on oracle-21c-vagrant, to my surprise the "No space left on device" error on oracle-21c-vagrant had disappeared by itself. This made me believe that all 5 instances share the disk space in some way at the VirtualBox level. This insight is important because (1) it explains why I ran out of space so easily with a relatively small amount of data in the oracle-21c-vagrant DB: the multiple VM instances were eating up the disk space quickly; and (2) when needing more space, I can shut down and remove unnecessary VM instances to release considerable space.)
Describe the issue
Recently/suddenly I ran into the following error message after running vagrant up:
...
==> oracle-21c-vagrant: Mounting shared folders...
oracle-21c-vagrant: /vagrant => /OracleDatabase/vagrant-projects/OracleDatabase/21.3.0
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sed -i '/#VAGRANT-BEGIN/,/#VAGRANT-END/d' /etc/fstab
Stdout from the command:
Stderr from the command:
sed: couldn't open temporary file /etc/sedOKJbll: No space left on device
Environment
Vagrant project: OracleDatabase/21.3.0
Vagrantfile: currently exactly the same as in your project, no customization
Vagrant: 2.3.4
Output of vagrant plugin list:
vagrant-disksize (0.1.3, global)
vagrant-vbguest (0.31.0, global)
VirtualBox Guest Additions (inside the VM): 7.0.8
VirtualBox on host: 7.0.8r156879
Kernel version (output of uname -a): Darwin 082-WS2695-ML1 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64
OS: macOS Ventura 13.4
Additional information
I'd like to try the approaches in this post: https://stackoverflow.com/questions/31746907/vagrant-no-space-left-on-device. But based on previous experience where the VM was corrupted and data was lost, I'd like to be cautious this time and learn from you which approach is safe, or safer, for the particular setup of this project. I really like the vagrant-disksize plugin approach, but would like to know how to increase the size in the Vagrantfile (if this is feasible). Any guidelines or precautions?
Thank you!