lamw / ghettoVCB

MIT License

Issues with excluding disks #241

Open jenrm opened 3 years ago

jenrm commented 3 years ago

Hi

I recently upgraded my ESXi host, and subsequently had to upgrade my GhettoVCB to the latest version.

After the upgrade, I had an issue with an existing configuration only backing up certain disks on a particular host.

Looking at the code, there appears to be a bug in how VMDKs are matched when only backing up certain disks. I was able to fix it with a simple patch:

--- ghettoVCB.sh_orig        2021-06-23 12:14:32.365029079 +1000
+++ ghettoVCB.sh_new     2021-06-23 12:15:00.616927131 +1000
@@ -1128,6 +1128,8 @@
                 VMDK_FILES_TO_BACKUP_NEW=""
                 for k in ${VMDK_FILES_TO_BACKUP}; do
                     VMDK_FILE_TO_BACKUP=$(echo "${k}" | sed -e 's/^[[:blank:]]*//;s/[[:blank:]]*$//')
+                    OLD_IFS="${IFS}"
+                    IFS=":"
                     for j in ${VMDKS}; do
                         VMDK=$(echo "${j}" | awk -F "###" '{print $1}')
                         if [[ "${VMDK_FILE_TO_BACKUP}" == "${VMDK}" ]]; then
@@ -1137,6 +1139,7 @@
                         logger "info" "WARNING: ${VMDK_FILE_TO_BACKUP} not found in VMDKs for ${VM_NAME}"
                         VM_VMDK_FAILED=1
                     done
+                    IFS="${OLD_IFS}"
                 done
                 IFS="${OLD_IFS}"
                 if [ -z "${VMDK_FILES_TO_BACKUP_NEW}" ]; then

Essentially, this ensures that the data returned from getVMDKs is parsed with : as the field separator. This appears to have been the case in the previous version of ghettoVCB, but not in the current one. Hopefully this helps you improve the code. This resolved the issue for me.
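For reference, the pattern the patch restores can be shown in a minimal, self-contained sketch. This is not the actual ghettoVCB code: the VMDKS sample value and disk names are made up, but the shape matches the patch, with IFS saved, set to : for the inner loop, and restored afterwards:

```shell
#!/bin/sh
# Hypothetical sample of getVMDKs-style output: colon-separated
# "<vmdk path>###<size>" entries (paths and sizes are made up).
VMDKS="disk0.vmdk###10:disk1.vmdk###20"

OLD_IFS="${IFS}"
IFS=":"                 # split the list on : for this loop only
for j in ${VMDKS}; do
    # each entry is "<path>###<size>"; keep only the path
    VMDK=$(echo "${j}" | awk -F "###" '{print $1}')
    echo "${VMDK}"
done
IFS="${OLD_IFS}"        # restore the original field separator
```

Without setting IFS=":" the inner loop inherits the outer loop's "," separator and never splits the colon-separated entries; without the restore, the outer loop's next iteration would wrongly split on ":".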

Note: this will still produce an error like:

2021-06-23 02:21:34 -- info: WARNING: vm-host.vmdk not found in VMDKs for vm-host

This seems to be another logic error: the warning fires as soon as a single entry doesn't match, even though several more disks remain to be checked. I may end up patching this shortly, too.

jenrm commented 3 years ago

If we modify the logic slightly to only warn when a particular disk is not found at all, we can also get rid of that subsequent error:

--- ghettoVCB.sh_old        2021-06-23 12:30:35.129725919 +1000
+++ ghettoVCB.sh_new     2021-06-23 12:29:39.997910250 +1000
@@ -1127,16 +1127,22 @@
                 IFS=","
                 VMDK_FILES_TO_BACKUP_NEW=""
                 for k in ${VMDK_FILES_TO_BACKUP}; do
+                    VM_VMDK_FAILED=1
                     VMDK_FILE_TO_BACKUP=$(echo "${k}" | sed -e 's/^[[:blank:]]*//;s/[[:blank:]]*$//')
+                    OLD_IFS="${IFS}"
+                    IFS=":"
                     for j in ${VMDKS}; do
                         VMDK=$(echo "${j}" | awk -F "###" '{print $1}')
                         if [[ "${VMDK_FILE_TO_BACKUP}" == "${VMDK}" ]]; then
                             VMDK_FILES_TO_BACKUP_NEW="${VMDK_FILES_TO_BACKUP_NEW},${VMDK_FILE_TO_BACKUP}"
+                            VM_VMDK_FAILED=0
                             break
                         fi
-                        logger "info" "WARNING: ${VMDK_FILE_TO_BACKUP} not found in VMDKs for ${VM_NAME}"
-                        VM_VMDK_FAILED=1
                     done
+                    IFS="${OLD_IFS}"
+                    if [[ ${VM_VMDK_FAILED} -eq 1 ]]; then
+                        logger "info" "WARNING: ${k} not found in VMDKs for ${VM_NAME}"
+                    fi
                 done
                 IFS="${OLD_IFS}"
                 if [ -z "${VMDK_FILES_TO_BACKUP_NEW}" ]; then

Obviously, I'd prefer someone else to verify this works, but this is what I'm running for now.
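The warn-after-scan logic in the second patch can be sketched in isolation. Again, this is not the actual script; the variable names and sample data are illustrative. A "found" flag starts cleared, is set on a match, and the warning is logged once, only after the whole list has been scanned:

```shell
#!/bin/sh
# Hypothetical data: a colon-separated list of known VMDKs and one
# disk the user asked to back up.
VMDKS="disk0.vmdk:disk1.vmdk"
VMDK_FILE_TO_BACKUP="disk1.vmdk"

FOUND=0
OLD_IFS="${IFS}"
IFS=":"
for VMDK in ${VMDKS}; do
    if [ "${VMDK_FILE_TO_BACKUP}" = "${VMDK}" ]; then
        FOUND=1          # matched; no need to scan further
        break
    fi
done
IFS="${OLD_IFS}"

# Warn only once, and only if no entry matched.
if [ ${FOUND} -eq 0 ]; then
    echo "WARNING: ${VMDK_FILE_TO_BACKUP} not found"
fi
```

This is the same idea as the patch's per-iteration reset of VM_VMDK_FAILED: the failure state is decided after the inner loop completes, not inside it.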

TheNetworkIsDown commented 3 years ago

Could this be a duplicate of https://github.com/lamw/ghettoVCB/issues/221 ? You can try the proposed fix, but it has not been merged yet.

lamw commented 3 years ago

@jenrm Could you try the fix in #221 and see if it resolves the problem? I've not merged it yet because we haven't gotten confirmation, and that would help.