OpenNebula / one


Recreate bitmaps in libvirt after poweron cycles. #6206

Closed Franco-Sparrow closed 1 year ago

Franco-Sparrow commented 1 year ago

Description

Hello team

Sir @rsmontero, we still have problems with the backup solution in the 6.6.1 branch, even with the commits suggested in previously closed issues (the modifications to backup_qcow2.rb).

I think this could be fixed if, after a reset backup, all bitmaps and dirty-bitmaps were removed and a new dirty-bitmap one-31-0 created; its increments and new dirty-bitmaps would then not collide with older ones from previous backups.
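
As a sketch of the idea, reusing the QemuImg helper from the script below (this mirrors what backup_full already does for poweroff backups; the hook name and its arguments are hypothetical, and a running VM would need the QMP equivalent instead of qemu-img):

# Hypothetical reset hook: after a reset backup, purge every persistent
# bitmap from the VM disks and start a fresh chain with one-<vid>-0, so new
# increments cannot collide with bitmaps left over from previous chains.
def reset_bitmaps(vm_dir, disk_ids, vid)
    disk_ids.each do |did|
        disk = QemuImg.new("#{vm_dir}/disk.#{did}")

        # Drop all bitmaps, including stale ones from older backup chains
        disk.bitmaps.each {|b| disk.bitmap(b['name'], :remove => '') }

        # Recreate the initial bitmap for the new chain
        disk.bitmap("one-#{vid}-0", :add => '')
    end
end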

To Reproduce

Using the current backup_qcow2.rb from the master branch:

#!/usr/bin/env ruby

# -------------------------------------------------------------------------- #
# Copyright 2002-2023, OpenNebula Project, OpenNebula Systems                #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
# http://www.apache.org/licenses/LICENSE-2.0                                 #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

require 'json'
require 'open3'
require 'rexml/document'
require 'base64'
require 'getoptlong'

require_relative './kvm'

#-------------------------------------------------------------------------------
# CONFIGURATION CONSTANTS
#   QEMU_IO_OPEN: options to open command for qemu-io
#   -t <cache_mode>: none, writeback (recommended)
#       writeback = cache.writeback (fsync() after each write)
#       none      = cache.writeback | cache.direct (use O_DIRECT)
#   -i <io_mode>: io_uring, threads, native (requires cache_mode = none)
#
#   IO_ASYNC: if true issues aio_read commands instead of read
#   OUTSTAND_OPS: number of aio_reads before issuing an aio_flush command
#
#   BDRV_MAX_REQUEST is the limit for the size of qemu-io operations
#-------------------------------------------------------------------------------
LOG_FILE     = nil
QEMU_IO_OPEN = '-t none -i native -o driver=qcow2'
IO_ASYNC     = false
OUTSTAND_OPS = 8

BDRV_MAX_REQUEST = 2**30

# rubocop:disable Style/ClassVars

#---------------------------------------------------------------------------
# Helper module to execute commands
#---------------------------------------------------------------------------
module Command

    # rubocop:disable Style/HashSyntax
    def log(message)
        return unless LOG_FILE

        File.write(LOG_FILE, "#{Time.now.strftime('%H:%M:%S.%L')} #{message}\n", mode: 'a')
    end
    # rubocop:enable Style/HashSyntax

    def cmd(command, args, opts = {})
        opts.each do |key, value|
            if value.class == Array
                value.each {|v| command << render_opt(key, v) }
            else
                command << render_opt(key, value)
            end
        end

        log("[CMD]: #{command} #{args}")

        out, err, rc = Open3.capture3("#{command} #{args}", :stdin_data => opts[:stdin_data])

        log('[CMD]: DONE')

        if rc.exitstatus != 0
            raise StandardError, "Error executing '#{command} #{args}':\n#{out} #{err}"
        end

        out
    end

    def render_opt(name, value)
        return '' if name == :stdin_data

        if name.length == 1
            opt = " -#{name.to_s.gsub('_', '-')}"
        else
            opt = " --#{name.to_s.gsub('_', '-')}"
        end

        if value && !value.empty?
            opt << ' ' if name[-1] != '='
            opt << value.to_s
        end

        opt
    end

end

#-------------------------------------------------------------------------------
# Setup an NBD server to pull changes, an optional map can be provided
#-------------------------------------------------------------------------------
module Nbd

    @@server = nil

    def self.start_nbd(file, map = '')
        return unless @@server.nil?

        @@socket = "#{File.realpath(file)}.socket"
        @@server = fork do
            args  = ['-r', '-k', @@socket, '-f', 'qcow2', '-t']
            args << '-B' << map unless map.empty?
            args << file

            exec('qemu-nbd', *args)
        end

        sleep(1) # TODO: inotify or poll for @@socket
    end

    def self.stop_nbd
        Process.kill('QUIT', @@server)
        Process.waitpid(@@server)

        File.unlink(@@socket)

        @@server = nil

        sleep(1)
    end

    def self.uri
        "nbd+unix:///?socket=#{@@socket}"
    end

end

# ------------------------------------------------------------------------------
# This class abstracts the information and several methods to operate over
# disk images files
# ------------------------------------------------------------------------------
class QemuImg

    include Command

    def initialize(path)
        @path  = path
        @_info = nil

        @path  = File.realpath(path) if File.exist?(path)
    end

    #---------------------------------------------------------------------------
    # qemu-img command methods
    #---------------------------------------------------------------------------
    QEMU_IMG_COMMANDS = ['convert', 'create', 'rebase', 'info', 'bitmap']

    QEMU_IMG_COMMANDS.each do |command|
        define_method(command.to_sym) do |args = '', opts|
            cmd("qemu-img #{command}", "#{@path} #{args}", opts)
        end
    end

    #---------------------------------------------------------------------------
    #  Image information methods.
    #
    #  Sample output of qemu image info output in json format
    #  {
    #  "backing-filename-format": "qcow2",
    #  "virtual-size": 268435456,
    #  "filename": "disk.0",
    #  "cluster-size": 65536,
    #  "format": "qcow2",
    #  "actual-size": 2510848,
    #  "format-specific": {
    #      "type": "qcow2",
    #      "data": {
    #          "compat": "1.1",
    #          "compression-type": "zlib",
    #          "lazy-refcounts": false,
    #          "bitmaps": [
    #              {
    #                  "flags": [
    #                      "auto"
    #                  ],
    #                  "name": "one-0-5",
    #                  "granularity": 65536
    #              },
    #              {
    #                  "flags": [
    #                      "auto"
    #                  ],
    #                  "name": "one-0-4",
    #                  "granularity": 65536
    #              }
    #          ],
    #          "refcount-bits": 16,
    #          "corrupt": false,
    #          "extended-l2": false
    #      }
    #  },
    #  "full-backing-filename": "/var/lib/one/datastores/1/e948982",
    #  "backing-filename": "/var/lib/one/datastores/1/e948982",
    #  "dirty-flag": false
    # }
    #---------------------------------------------------------------------------
    def [](key)
        if !@_info
            out    = info(:output => 'json', :force_share => '')
            @_info = JSON.parse(out)
        end

        @_info[key]
    end

    def bitmaps
        self['format-specific']['data']['bitmaps']
    rescue StandardError
        []
    end

    #---------------------------------------------------------------------------
    # Pull changes since last checkpoint through the NBD server in this image
    #   1. Get list of dirty blocks
    #   2. Create increment qcow2 using NBD as backing file
    #   3. Pull changes reading (copy-on-write)
    #
    # Note: Increment files need rebase to reconstruct the increment chain
    #---------------------------------------------------------------------------
    def pull_changes(uri, map)
        # ----------------------------------------------------------------------
        # Get extents from NBD server
        # ----------------------------------------------------------------------
        exts = if !map || map.empty?
                   # TODO: change pattern to include zero
                   extents(uri, '', 'data')
               else
                   extents(uri, map, 'dirty')
               end

        rc, msg = create(:f => 'qcow2', :F => 'raw', :b => uri)

        return [false, msg] unless rc

        # ----------------------------------------------------------------------
        # Create a qemu-io script to pull changes
        # ----------------------------------------------------------------------
        io_script = "open -C #{QEMU_IO_OPEN} #{@path}\n"
        index     = -1

        exts.each do |e|
            ext_length = Integer(e['length'])
            new_exts   = []

            if ext_length > BDRV_MAX_REQUEST
                ext_offset = Integer(e['offset'])

                loop do
                    index += 1

                    blk_length = [ext_length, BDRV_MAX_REQUEST].min

                    new_exts << {
                        'offset' => ext_offset,
                        'length' => blk_length,
                        'index'  => index
                    }

                    ext_offset += BDRV_MAX_REQUEST
                    ext_length -= BDRV_MAX_REQUEST

                    break if ext_length <= 0
                end
            else
                index += 1

                new_exts << {
                    'offset' => e['offset'],
                    'length' => e['length'],
                    'index'  => index
                }
            end

            new_exts.each do |i|
                if IO_ASYNC
                    io_script << "aio_read -q #{i['offset']} #{i['length']}\n"
                    io_script << "aio_flush\n" if (i['index']+1)%OUTSTAND_OPS == 0
                else
                    io_script << "read -q #{i['offset']} #{i['length']}\n"
                end
            end
        end

        io_script << "aio_flush\n" if IO_ASYNC

        cmd('qemu-io', '', :stdin_data => io_script)
    end

    private

    #---------------------------------------------------------------------------
    # Gets the dirty extent information from the given map using an NBD server
    #---------------------------------------------------------------------------
    def extents(uri, map, description)
        opts = { :json => '' }

        if !map.empty?
            opts[:map=] = map
        else
            opts[:map]  = ''
        end

        out = cmd('nbdinfo', uri, opts)

        bmap = JSON.parse(out)
        exts = []

        bmap.each do |e|
            next if !e || e['description'] != description

            exts << e
        end

        exts
    end

end

# ------------------------------------------------------------------------------
# This class represents a KVM domain, includes information about the associated
# OpenNebula VM
# ------------------------------------------------------------------------------
class KVMDomain

    include TransferManager::KVM
    include Command

    attr_reader :parent_id, :backup_id, :checkpoint

    #---------------------------------------------------------------------------
    # @param vm[REXML::Document] OpenNebula XML VM information
    # @param opts[Hash] Vm attributes:
    #   - :vm_dir VM folder (/var/lib/one/datastores/<DS_ID>/<VM_ID>)
    #---------------------------------------------------------------------------
    def initialize(vm_xml, opts = {})
        @vm  = REXML::Document.new(vm_xml).root

        @vid = @vm.elements['ID'].text
        @dom = @vm.elements['DEPLOY_ID'].text

        @backup_id = 0
        @parent_id = -1

        @checkpoint = false

        mode = @vm.elements['BACKUPS/BACKUP_CONFIG/MODE']

        if mode
            case mode.text
            when 'FULL'
                @backup_id = 0
                @parent_id = -1

                @checkpoint = false
            when 'INCREMENT'
                li = @vm.elements['BACKUPS/BACKUP_CONFIG/LAST_INCREMENT_ID'].text.to_i

                @backup_id = li + 1
                @parent_id = li

                @checkpoint = true
            end
        end

        @vm_dir  = opts[:vm_dir]
        @tmp_dir = "#{opts[:vm_dir]}/tmp"
        @bck_dir = "#{opts[:vm_dir]}/backup"

        @socket  = "#{opts[:vm_dir]}/backup.socket"

        # State variables for domain operations
        @ongoing = false
        @frozen  = nil
    end

    # "pause" the VM before excuting any FS related operation. The modes are:
    #   - NONE (no operation)
    #   - AGENT (domfsfreeze - domfsthaw)
    #   - SUSPEND (suspend - resume)
    #
    # @return [String, String] freeze and thaw commands
    def fsfreeze
        @frozen = begin
            @vm.elements['/VM/BACKUPS/BACKUP_CONFIG/FS_FREEZE'].text.upcase
        rescue StandardError
            'NONE'
        end

        case @frozen
        when 'AGENT'
            cmd("#{virsh} domfsfreeze", @dom)
        when 'SUSPEND'
            cmd("#{virsh} suspend", @dom)
        end
    end

    def fsthaw
        return unless @frozen

        case @frozen
        when 'AGENT'
            cmd("#{virsh} domfsthaw", @dom)
        when 'SUSPEND'
            cmd("#{virsh} resume", @dom)
        end
    ensure
        @frozen = nil
    end

    #---------------------------------------------------------------------------
    # Re-define the parent_id checkpoint if not included in the checkpoint-list.
    # If the checkpoint is not present in storage the method will fail.
    #
    #   @param[String] List of disks to include in the checkpoint
    #   @param[Integer] id of the checkpoint to define
    #---------------------------------------------------------------------------
    def define_checkpoint(disks_s)
        return if @parent_id == -1

        #-----------------------------------------------------------------------
        #  Check if the parent_id checkpoint is already defined for this domain
        #-----------------------------------------------------------------------
        out = cmd("#{virsh} checkpoint-list", @dom, :name => '')
        out.strip!

        check_ids = []

        out.each_line do |l|
            m = l.match(/(one-[0-9]+)-([0-9]+)/)
            next unless m

            check_ids << m[2].to_i
        end

        # Remove current checkpoint (e.g. a previous failed backup operation)
        if check_ids.include? @backup_id
            cpname = "one-#{@vid}-#{@backup_id}"

            begin
                cmd("#{virsh} checkpoint-delete", @dom, :checkpointname => cpname)
            rescue StandardError
                cmd("#{virsh} checkpoint-delete", @dom,
                    :checkpointname => cpname, :metadata => '')
            end
        end

        return if check_ids.include? @parent_id

        #-----------------------------------------------------------------------
        # Try to re-define checkpoint, will fail if not present in storage.
        # Can be queried using qemu-monitor
        #
        # out  = cmd("#{virsh} qemu-monitor-command", @dom,
        #           :cmd => '{"execute": "query-block","arguments": {}}')
        # outh = JSON.parse(out)
        #
        # outh['return'][0]['inserted']['dirty-bitmaps']
        #   => [{"name"=>"one-0-2", "recording"=>true, "persistent"=>true,
        #        "busy"=>false, "granularity"=>65536, "count"=>327680}]
        #-----------------------------------------------------------------------
        disks = disks_s.split ':'
        tgts  = []

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text

            next unless disks.include? did

            tgts << d.elements['TARGET'].text
        end

        return if tgts.empty?

        disks = '<disks>'
        tgts.each {|tgt| disks << "<disk name='#{tgt}'/>" }
        disks << '</disks>'

        checkpoint_xml = <<~EOS
            <domaincheckpoint>
                <name>one-#{@vid}-#{@parent_id}</name>
                <creationTime>#{Time.now.to_i}</creationTime>
                #{disks}
            </domaincheckpoint>
        EOS

        cpath = "#{@tmp_dir}/checkpoint.xml"

        File.open(cpath, 'w') {|f| f.write(checkpoint_xml) }

        cmd("#{virsh} checkpoint-create", @dom,
            :xmlfile => cpath, :redefine => '')
    end

    #---------------------------------------------------------------------------
    # Cleans defined checkpoints up to the last two. This way we can retry
    # the backup operation in case it fails
    #---------------------------------------------------------------------------
    def clean_checkpoints(all = false)
        return unless @checkpoint

        out = cmd("#{virsh} checkpoint-list", @dom, :name => '')
        out.strip!

        out.each_line do |l|
            m = l.match(/(one-[0-9]+)-([0-9]+)/)
            next if !m || (!all && m[2].to_i >= @parent_id)

            cmd("#{virsh} checkpoint-delete", "#{@dom} #{m[1]}-#{m[2]}")
        end
    end

    #---------------------------------------------------------------------------
    #  Make a live backup for the VM.
    #   @param [Array] ID of disks that will take part in the backup
    #   @param [Boolean] if true do not generate checkpoint
    #---------------------------------------------------------------------------
    def backup_nbd_live(disks_s)
        init  = Time.now
        disks = disks_s.split ':'

        fsfreeze

        start_backup(disks, @backup_id, @parent_id, @checkpoint)

        fsthaw

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text
            tgt = d.elements['TARGET'].text

            next unless disks.include? did

            ipath = "#{@bck_dir}/disk.#{did}.#{@backup_id}"
            idisk = QemuImg.new(ipath)

            if @parent_id == -1
                map = ''
            else
                map = "qemu:dirty-bitmap:backup-#{tgt}"
            end

            idisk.pull_changes(mkuri(tgt), map)
        end

        log("[BCK]: Incremental backup done in #{Time.now - init}s")
    ensure
        fsthaw
        stop_backup
    end

    def backup_full_live(disks_s)
        init  = Time.now
        disks = disks_s.split ':'
        dspec = []
        qdisk = {}

        disk_xml = '<disks>'

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text
            tgt = d.elements['TARGET'].text

            next unless disks.include? did

            overlay = "#{@tmp_dir}/overlay_#{did}.qcow2"

            File.open(overlay, 'w') {}

            dspec << "#{tgt},file=#{overlay}"

            disk_xml << "<disk name='#{tgt}'/>"

            qdisk[did] = QemuImg.new("#{@vm_dir}/disk.#{did}")
        end

        disk_xml << '</disks>'

        opts = {
            :name      => "one-#{@vid}-backup",
            :disk_only => '',
            :atomic    => '',
            :diskspec  => dspec
        }

        checkpoint_xml = <<~EOS
            <domaincheckpoint>
               <name>one-#{@vid}-0</name>
               #{disk_xml}
             </domaincheckpoint>
        EOS

        cpath = "#{@tmp_dir}/checkpoint.xml"

        File.open(cpath, 'w') {|f| f.write(checkpoint_xml) }

        fsfreeze

        cmd("#{virsh} snapshot-create-as", @dom, opts)

        cmd("#{virsh} checkpoint-create", @dom, :xmlfile => cpath) if @checkpoint

        fsthaw

        qdisk.each do |did, disk|
            disk.convert("#{@bck_dir}/disk.#{did}.0", :m => '4', :O => 'qcow2', :U => '')
        end

        log("[BCK]: Full backup done in #{Time.now - init}s")
    ensure
        fsthaw
    end

    def stop_backup_full_live(disks_s)
        disks = disks_s.split ':'

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text
            tgt = d.elements['TARGET'].text

            next unless disks.include? did

            opts = {
                :base   => "#{@vm_dir}/disk.#{did}",
                :active => '',
                :pivot  => '',
                :keep_relative => ''
            }

            cmd("#{virsh} blockcommit", "#{@dom} #{tgt}", opts)
        end

        cmd("#{virsh} snapshot-delete", @dom.to_s,
            :snapshotname => "one-#{@vid}-backup",
            :metadata     => '')
    end

    #---------------------------------------------------------------------------
    #  Make a backup for the VM. (see make_backup_live)
    #---------------------------------------------------------------------------
    def backup_nbd(disks_s)
        init  = Time.now
        disks = disks_s.split ':'

        if @parent_id == -1
            nbd_map = ''
            map     = ''
        else
            nbd_map = "one-#{@vid}-#{@parent_id}"
            map     = "qemu:dirty-bitmap:#{nbd_map}"
        end

        dids = []

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text

            dids << did if disks.include? did
        end

        dids.each do |d|
            idisk = QemuImg.new("#{@bck_dir}/disk.#{d}.#{@backup_id}")

            Nbd.start_nbd("#{@vm_dir}/disk.#{d}", nbd_map)

            idisk.pull_changes(Nbd.uri, map)
        ensure
            Nbd.stop_nbd
        end

        dids.each do |d|
            idisk = QemuImg.new("#{@vm_dir}/disk.#{d}")

            idisk.bitmaps.each do |b|
                next if b['name'] == "one-#{@vid}-#{@parent_id}"

                idisk.bitmap(b['name'], :remove => '')
            end

            idisk.bitmap("one-#{@vid}-#{@backup_id}", :add => '')
        end if @checkpoint

        log("[BCK]: Incremental backup done in #{Time.now - init}s")
    end

    def backup_full(disks_s)
        init  = Time.now
        disks = disks_s.split ':'

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text

            next unless disks.include? did

            sdisk = QemuImg.new("#{@vm_dir}/disk.#{did}")
            ddisk = "#{@bck_dir}/disk.#{did}.0"

            sdisk.convert(ddisk, :m => '4', :O => 'qcow2', :U => '')

            next unless @checkpoint

            bms = sdisk.bitmaps
            bms.each {|bm| sdisk.bitmap(bm['name'], :remove => '') } unless bms.nil?

            sdisk.bitmap("one-#{@vid}-0", :add => '')
        end

        log("[BCK]: Full backup done in #{Time.now - init}s")
    end

    private

    # Generate nbd URI to query block bitmaps for a device
    def mkuri(target)
        "nbd+unix:///#{target}?socket=#{@socket}"
    end

    #---------------------------------------------------------------------------
    # Start a Backup operation on the domain (See make_backup_live)
    #---------------------------------------------------------------------------
    def start_backup(disks, bck_id, pid, checkpoint)
        parent = "one-#{@vid}-#{pid}"
        bname  = "one-#{@vid}-#{bck_id}"

        parent_xml = "<incremental>#{parent}</incremental>" if pid != -1

        backup_xml = <<~EOS
            <domainbackup mode='pull'>
              #{parent_xml}
              <server transport='unix' socket='#{@socket}'/>
              <disks>
        EOS

        checkpoint_xml = <<~EOS
            <domaincheckpoint>
              <name>#{bname}</name>
              <disks>
        EOS

        @vm.elements.each 'TEMPLATE/DISK' do |d|
            did = d.elements['DISK_ID'].text
            tgt = d.elements['TARGET'].text
            szm = d.elements['SIZE'].text

            next unless disks.include? did

            spath = "#{@tmp_dir}/scracth.#{did}.qcow2"

            simg = QemuImg.new(spath)
            simg.create("#{szm}M", :f => 'qcow2')

            backup_xml << <<~EOS
                <disk name='#{tgt}' backup='yes' type='file'>
                  <scratch file='#{spath}'/>
                </disk>
            EOS

            checkpoint_xml << "<disk name='#{tgt}'/>"
        end

        checkpoint_xml << <<~EOS
              </disks>
            </domaincheckpoint>
        EOS

        backup_xml << <<~EOS
              </disks>
            </domainbackup>
        EOS

        backup_path = "#{@tmp_dir}/backup.xml"
        check_path  = "#{@tmp_dir}/checkpoint.xml"

        File.open(backup_path, 'w') {|f| f.write(backup_xml) }

        File.open(check_path, 'w') {|f| f.write(checkpoint_xml) }

        opts = { :reuse_external => '', :backupxml => backup_path }
        opts[:checkpointxml] = check_path if checkpoint

        cmd("#{virsh} backup-begin", @dom, opts)

        @ongoing = true
    end

    #---------------------------------------------------------------------------
    # Stop an ongoing Backup operation on the domain
    #---------------------------------------------------------------------------
    def stop_backup
        return unless @ongoing

        cmd("#{virsh} domjobabort", @dom, {})
    ensure
        @ongoing = false
    end

end

opts = GetoptLong.new(
    ['--disk', '-d', GetoptLong::REQUIRED_ARGUMENT],
    ['--vxml', '-x', GetoptLong::REQUIRED_ARGUMENT],
    ['--path', '-p', GetoptLong::REQUIRED_ARGUMENT],
    ['--live', '-l', GetoptLong::NO_ARGUMENT],
    ['--stop', '-s', GetoptLong::NO_ARGUMENT]
)

begin
    path = disk = vxml = ''
    live = stop = false

    opts.each do |opt, arg|
        case opt
        when '--disk'
            disk = arg
        when '--path'
            path = arg
        when '--live'
            live = true
        when '--stop'
            stop = true
        when '--vxml'
            vxml = arg
        end
    end

    vm = KVMDomain.new(Base64.decode64(File.read(vxml)), :vm_dir => path)

    #---------------------------------------------------------------------------
    #  Stop operation. Only for full backups in live mode. It blockcommits
    #  changes and cleans snapshot.
    #---------------------------------------------------------------------------
    if stop
        vm.stop_backup_full_live(disk) if vm.parent_id == -1 && live
        exit(0)
    end

    #---------------------------------------------------------------------------
    #  Backup operation
    #   - (live - full) Creates a snapshot to copy the disks via qemu-convert
    #     all previous defined checkpoints are cleaned.
    #   - (live - increment) starts a backup operation in libvirt and pull changes
    #     through NBD server using qemu-io copy-on-read feature
    #   - (poff - full) copy disks via qemu-convert
    #   - (poff - incremental) starts qemu-nbd server to pull changes from the
    #     last checkpoint
    #---------------------------------------------------------------------------
    if live
        if vm.parent_id == -1
            vm.clean_checkpoints(true)

            vm.backup_full_live(disk)
        else
            vm.define_checkpoint(disk)

            vm.backup_nbd_live(disk)

            vm.clean_checkpoints
        end
    else
        if vm.parent_id == -1
            vm.backup_full(disk)
        else
            vm.backup_nbd(disk)
        end
    end
rescue StandardError => e
    puts e.message
    exit(-1)
end

# rubocop:enable Style/ClassVars

New VM for the tests

There should be no bitmaps or dirty-bitmaps for the newly created VM 31.
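
The persistent bitmaps can be checked straight from the qcow2 header, for example (a minimal check; the disk path is an assumption based on the datastore layout in the logs below):

#!/usr/bin/env ruby
# List the persistent bitmaps recorded in the qcow2 header of the VM disk
require 'json'
require 'open3'

disk = '/var/lib/one/datastores/0/31/disk.0' # assumed disk path

out, = Open3.capture2('qemu-img', 'info', '--output=json', '--force-share', disk)

bitmaps = JSON.parse(out).dig('format-specific', 'data', 'bitmaps')

puts(bitmaps ? JSON.pretty_generate(bitmaps) : 'no bitmaps')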

The LAST_INCREMENT_ID has the right value (-1):

onevm updateconf 31

# [...]
BACKUP_CONFIG=[
  BACKUP_VOLATILE="NO",
  FS_FREEZE="AGENT",
  INCREMENTAL_BACKUP_ID="-1",
  KEEP_LAST="7",
  LAST_INCREMENT_ID="-1",
  MODE="INCREMENT" ]

Backup #1: the VM is RUNNING and has no previous backups. A new bitmap one-31-0 should be created, along with a new backup image (image 75 below).

onevm backup -d 100 31

There should be a dirty-bitmap one-31-0.
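
The dirty bitmaps of the running domain can be inspected with the query-block command that define_checkpoint mentions in its comments (the deploy id below is taken from the logs; adjust as needed):

#!/usr/bin/env ruby
# Show the in-memory dirty bitmaps of the running domain via QMP query-block
require 'json'
require 'open3'

dom = '382e6a9b-74f0-4c3a-9fdf-75e27d4248ef' # assumed deploy id

out, = Open3.capture2('virsh', '--connect', 'qemu:///system',
                      'qemu-monitor-command', dom,
                      '--cmd', '{"execute": "query-block", "arguments": {}}')

JSON.parse(out)['return'].each do |blk|
    bms = blk.dig('inserted', 'dirty-bitmaps')
    puts JSON.pretty_generate(bms) if bms
end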

The LAST_INCREMENT_ID has the right value:

onevm updateconf 31

# [...]
BACKUP_CONFIG=[
  BACKUP_VOLATILE="NO",
  FS_FREEZE="AGENT",
  INCREMENTAL_BACKUP_ID="75",
  KEEP_LAST="7",
  LAST_INCREMENT_ID="0",
  MODE="INCREMENT" ]
oneimage show 75

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/09 18:29:37 c182ef

Backup #2: the VM is RUNNING and the bitmap one-31-0 should exist. A new bitmap one-31-1 should be created.

onevm backup -d 100 31

There should be two dirty-bitmaps, one-31-0 and one-31-1, and no bitmaps.

The LAST_INCREMENT_ID has the right value:

onevm updateconf 31

# [...]
BACKUP_CONFIG=[
  BACKUP_VOLATILE="NO",
  FS_FREEZE="AGENT",
  INCREMENTAL_BACKUP_ID="75",
  KEEP_LAST="7",
  LAST_INCREMENT_ID="1",
  MODE="INCREMENT" ]
oneimage show 75

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/09 18:29:37 c182ef
  1   0 I 1M        05/09 18:33:48 10a6df

Backup #3: the VM is RUNNING and the dirty-bitmaps one-31-0 and one-31-1 should exist. A new dirty-bitmap one-31-2 should be created.

onevm backup -d 100 31

There are dirty-bitmaps, but only two, one-31-1 and one-31-2; one-31-0 was rotated out (checkpoints older than the parent are cleaned up).

The LAST_INCREMENT_ID has the right value:

onevm updateconf 31

# [...]
BACKUP_CONFIG=[
  BACKUP_VOLATILE="NO",
  FS_FREEZE="AGENT",
  INCREMENTAL_BACKUP_ID="75",
  KEEP_LAST="7",
  LAST_INCREMENT_ID="2",
  MODE="INCREMENT" ]
oneimage show 75

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/09 18:29:37 c182ef
  1   0 I 1M        05/09 18:33:48 10a6df
  2   1 I 1M        05/09 18:45:00 afa460

Changing the VM state from RUNNING to POWEROFF:

onevm poweroff 31

Changing the VM state from POWEROFF to RUNNING:

onevm resume 31
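
This power cycle seems to be where things go wrong: the persistent bitmaps stay in the qcow2 image, while libvirt's checkpoint metadata may be lost (which is why the script has define_checkpoint). The two views can be compared with something like (deploy id and disk path assumed from the logs):

#!/usr/bin/env ruby
# Compare libvirt's checkpoint list with the bitmaps stored in the image
require 'json'
require 'open3'

dom  = '382e6a9b-74f0-4c3a-9fdf-75e27d4248ef' # assumed deploy id
disk = '/var/lib/one/datastores/0/31/disk.0'  # assumed disk path

cps, = Open3.capture2('virsh', '--connect', 'qemu:///system',
                      'checkpoint-list', dom, '--name')

info, = Open3.capture2('qemu-img', 'info', '--output=json', '--force-share', disk)
bms   = JSON.parse(info).dig('format-specific', 'data', 'bitmaps') || []

puts "libvirt checkpoints: #{cps.split.inspect}"
puts "qcow2 bitmaps:       #{bms.map {|b| b['name'] }.inspect}"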

Backup #4 with --reset: the VM is RUNNING. It should remove all previous dirty-bitmaps, create a new one-31-0, and create a new backup image 76.

onevm backup --reset -d 100 31

Where is the bitmap one-31-0, and why are the bitmaps one-31-1 and one-31-2 still there? A new dirty-bitmap one-31-0 was created, but the old dirty-bitmaps from previous backups remain. The next backup will fail, because with LAST_INCREMENT_ID back to 0 it will try to create checkpoint one-31-1, whose bitmap already exists in the image.
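
As a possible manual cleanup, mirroring what backup_full does with QemuImg's bitmap :remove call (a hypothetical sketch: the disk path is assumed, and the VM must be powered off so the image is not locked by QEMU):

#!/usr/bin/env ruby
# Drop the stale bitmaps left over from the previous chain, keeping one-31-0
require 'open3'

disk = '/var/lib/one/datastores/0/31/disk.0' # assumed disk path

['one-31-1', 'one-31-2'].each do |bm|
    _, err, rc = Open3.capture3('qemu-img', 'bitmap', '--remove', disk, bm)
    warn "failed to remove #{bm}: #{err}" unless rc.success?
end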

The LAST_INCREMENT_ID has the right value:

onevm updateconf 31

# [...]
BACKUP_CONFIG=[
  BACKUP_VOLATILE="NO",
  FS_FREEZE="AGENT",
  INCREMENTAL_BACKUP_ID="76",
  KEEP_LAST="7",
  LAST_INCREMENT_ID="0",
  MODE="INCREMENT" ]
oneimage show 76

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/09 18:55:06 7ec4d9

Backup #5: the VM is RUNNING and only the dirty-bitmap one-31-0 should exist, but the old ones (one-31-1 and one-31-2) are still there.

onevm backup -d 100 31

The backup failed:

Tue May  9 19:05:33 2023 [Z0][VM][I]: New LCM state is BACKUP
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Command execution failed (exit code: 255): /var/lib/one/remotes/tm/shared/prebackup_live ON-DEV-N1-kvm:/var/lib/one//datastores/0/31 0: 382e6a9b-74f0-4c3a-9fdf-75e27d4248ef 31 0
Tue May  9 19:05:36 2023 [Z0][VMM][E]: prebackup_live: Command failed:
Tue May  9 19:05:36 2023 [Z0][VMM][I]: export LANG=C
Tue May  9 19:05:36 2023 [Z0][VMM][I]: export LC_ALL=C
Tue May  9 19:05:36 2023 [Z0][VMM][I]: set -ex -o pipefail
Tue May  9 19:05:36 2023 [Z0][VMM][I]:
Tue May  9 19:05:36 2023 [Z0][VMM][I]: # ----------------------------------
Tue May  9 19:05:36 2023 [Z0][VMM][I]: # Prepare the tmp and backup folders
Tue May  9 19:05:36 2023 [Z0][VMM][I]: # ----------------------------------
Tue May  9 19:05:36 2023 [Z0][VMM][I]: [ -d /var/lib/one//datastores/0/31/tmp ] && rm -rf /var/lib/one//datastores/0/31/tmp
Tue May  9 19:05:36 2023 [Z0][VMM][I]:
Tue May  9 19:05:36 2023 [Z0][VMM][I]: [ -d /var/lib/one//datastores/0/31/backup ] && rm -rf /var/lib/one//datastores/0/31/backup
Tue May  9 19:05:36 2023 [Z0][VMM][I]:
Tue May  9 19:05:36 2023 [Z0][VMM][I]: mkdir -p /var/lib/one//datastores/0/31/tmp
Tue May  9 19:05:36 2023 [Z0][VMM][I]:
Tue May  9 19:05:36 2023 [Z0][VMM][I]: mkdir -p /var/lib/one//datastores/0/31/backup
Tue May  9 19:05:36 2023 [Z0][VMM][I]:
Tue May  9 19:05:36 2023 [Z0][VMM][I]: echo "PFZNPjxJRD4zMTwvSUQ+PFVJRD4wPC9VSUQ+PEdJRD4wPC9HSUQ+PFVOQU1F
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Pm9uZWFkbWluPC9VTkFNRT48R05BTUU+b25lYWRtaW48L0dOQU1FPjxOQU1F
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PmFscGluZTwvTkFNRT48UEVSTUlTU0lPTlM+PE9XTkVSX1U+MTwvT1dORVJf
Tue May  9 19:05:36 2023 [Z0][VMM][I]: VT48T1dORVJfTT4xPC9PV05FUl9NPjxPV05FUl9BPjA8L09XTkVSX0E+PEdS
Tue May  9 19:05:36 2023 [Z0][VMM][I]: T1VQX1U+MDwvR1JPVVBfVT48R1JPVVBfTT4wPC9HUk9VUF9NPjxHUk9VUF9B
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjA8L0dST1VQX0E+PE9USEVSX1U+MDwvT1RIRVJfVT48T1RIRVJfTT4wPC9P
Tue May  9 19:05:36 2023 [Z0][VMM][I]: VEhFUl9NPjxPVEhFUl9BPjA8L09USEVSX0E+PC9QRVJNSVNTSU9OUz48TEFT
Tue May  9 19:05:36 2023 [Z0][VMM][I]: VF9QT0xMPjA8L0xBU1RfUE9MTD48U1RBVEU+MzwvU1RBVEU+PExDTV9TVEFU
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RT42OTwvTENNX1NUQVRFPjxQUkVWX1NUQVRFPjM8L1BSRVZfU1RBVEU+PFBS
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RVZfTENNX1NUQVRFPjY5PC9QUkVWX0xDTV9TVEFURT48UkVTQ0hFRD4wPC9S
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RVNDSEVEPjxTVElNRT4xNjgzNjcxMDUzPC9TVElNRT48RVRJTUU+MDwvRVRJ
Tue May  9 19:05:36 2023 [Z0][VMM][I]: TUU+PERFUExPWV9JRD4zODJlNmE5Yi03NGYwLTRjM2EtOWZkZi03NWUyN2Q0
Tue May  9 19:05:36 2023 [Z0][VMM][I]: MjQ4ZWY8L0RFUExPWV9JRD48TU9OSVRPUklORy8+PFRFTVBMQVRFPjxBVVRP
Tue May  9 19:05:36 2023 [Z0][VMM][I]: TUFUSUNfRFNfUkVRVUlSRU1FTlRTPjwhW0NEQVRBWygiQ0xVU1RFUlMvSUQi
Tue May  9 19:05:36 2023 [Z0][VMM][I]: IEA+IDApXV0+PC9BVVRPTUFUSUNfRFNfUkVRVUlSRU1FTlRTPjxBVVRPTUFU
Tue May  9 19:05:36 2023 [Z0][VMM][I]: SUNfTklDX1JFUVVJUkVNRU5UUz48IVtDREFUQVsoIkNMVVNURVJTL0lEIiBA
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PiAwKV1dPjwvQVVUT01BVElDX05JQ19SRVFVSVJFTUVOVFM+PEFVVE9NQVRJ
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Q19SRVFVSVJFTUVOVFM+PCFbQ0RBVEFbKENMVVNURVJfSUQgPSAwKSAmICEo
Tue May  9 19:05:36 2023 [Z0][VMM][I]: UFVCTElDX0NMT1VEID0gWUVTKSAmICEoUElOX1BPTElDWSA9IFBJTk5FRCld
Tue May  9 19:05:36 2023 [Z0][VMM][I]: XT48L0FVVE9NQVRJQ19SRVFVSVJFTUVOVFM+PENPTlRFWFQ+PERJU0tfSUQ+
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PCFbQ0RBVEFbMV1dPjwvRElTS19JRD48TkVUV09SSz48IVtDREFUQVtZRVNd
Tue May  9 19:05:36 2023 [Z0][VMM][I]: XT48L05FVFdPUks+PFNTSF9QVUJMSUNfS0VZPjwhW0NEQVRBW11dPjwvU1NI
Tue May  9 19:05:36 2023 [Z0][VMM][I]: X1BVQkxJQ19LRVk+PFRBUkdFVD48IVtDREFUQVtoZGFdXT48L1RBUkdFVD48
Tue May  9 19:05:36 2023 [Z0][VMM][I]: L0NPTlRFWFQ+PENQVT48IVtDREFUQVsxXV0+PC9DUFU+PERJU0s+PEFMTE9X
Tue May  9 19:05:36 2023 [Z0][VMM][I]: X09SUEhBTlM+PCFbQ0RBVEFbRk9STUFUXV0+PC9BTExPV19PUlBIQU5TPjxD
Tue May  9 19:05:36 2023 [Z0][VMM][I]: TE9ORT48IVtDREFUQVtZRVNdXT48L0NMT05FPjxDTE9ORV9UQVJHRVQ+PCFb
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Q0RBVEFbU1lTVEVNXV0+PC9DTE9ORV9UQVJHRVQ+PENMVVNURVJfSUQ+PCFb
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Q0RBVEFbMF1dPjwvQ0xVU1RFUl9JRD48REFUQVNUT1JFPjwhW0NEQVRBW2lt
Tue May  9 19:05:36 2023 [Z0][VMM][I]: YWdlc11dPjwvREFUQVNUT1JFPjxEQVRBU1RPUkVfSUQ+PCFbQ0RBVEFbMV1d
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjwvREFUQVNUT1JFX0lEPjxERVZfUFJFRklYPjwhW0NEQVRBW3ZkXV0+PC9E
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RVZfUFJFRklYPjxESVNLX0lEPjwhW0NEQVRBWzBdXT48L0RJU0tfSUQ+PERJ
Tue May  9 19:05:36 2023 [Z0][VMM][I]: U0tfU05BUFNIT1RfVE9UQUxfU0laRT48IVtDREFUQVswXV0+PC9ESVNLX1NO
Tue May  9 19:05:36 2023 [Z0][VMM][I]: QVBTSE9UX1RPVEFMX1NJWkU+PERJU0tfVFlQRT48IVtDREFUQVtGSUxFXV0+
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PC9ESVNLX1RZUEU+PERSSVZFUj48IVtDREFUQVtxY293Ml1dPjwvRFJJVkVS
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjxGT1JNQVQ+PCFbQ0RBVEFbcWNvdzJdXT48L0ZPUk1BVD48SU1BR0U+PCFb
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Q0RBVEFbQWxwaW5lIExpbnV4IDMuMTddXT48L0lNQUdFPjxJTUFHRV9JRD48
Tue May  9 19:05:36 2023 [Z0][VMM][I]: IVtDREFUQVsyNl1dPjwvSU1BR0VfSUQ+PElNQUdFX1NUQVRFPjwhW0NEQVRB
Tue May  9 19:05:36 2023 [Z0][VMM][I]: WzJdXT48L0lNQUdFX1NUQVRFPjxMTl9UQVJHRVQ+PCFbQ0RBVEFbTk9ORV1d
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjwvTE5fVEFSR0VUPjxPUklHSU5BTF9TSVpFPjwhW0NEQVRBWzI1Nl1dPjwv
Tue May  9 19:05:36 2023 [Z0][VMM][I]: T1JJR0lOQUxfU0laRT48UkVBRE9OTFk+PCFbQ0RBVEFbTk9dXT48L1JFQURP
Tue May  9 19:05:36 2023 [Z0][VMM][I]: TkxZPjxTQVZFPjwhW0NEQVRBW05PXV0+PC9TQVZFPjxTSVpFPjwhW0NEQVRB
Tue May  9 19:05:36 2023 [Z0][VMM][I]: WzI1Nl1dPjwvU0laRT48U09VUkNFPjwhW0NEQVRBWy92YXIvbGliL29uZS8v
Tue May  9 19:05:36 2023 [Z0][VMM][I]: ZGF0YXN0b3Jlcy8xLzYzNjhiYzkyNWQxODhmMWRjMGU3MTE1ODQyNTBjYzUw
Tue May  9 19:05:36 2023 [Z0][VMM][I]: XV0+PC9TT1VSQ0U+PFRBUkdFVD48IVtDREFUQVt2ZGFdXT48L1RBUkdFVD48
Tue May  9 19:05:36 2023 [Z0][VMM][I]: VE1fTUFEPjwhW0NEQVRBW3NoYXJlZF1dPjwvVE1fTUFEPjxUWVBFPjwhW0NE
Tue May  9 19:05:36 2023 [Z0][VMM][I]: QVRBW0ZJTEVdXT48L1RZUEU+PC9ESVNLPjxHUkFQSElDUz48TElTVEVOPjwh
Tue May  9 19:05:36 2023 [Z0][VMM][I]: W0NEQVRBWzAuMC4wLjBdXT48L0xJU1RFTj48UE9SVD48IVtDREFUQVs1OTMx
Tue May  9 19:05:36 2023 [Z0][VMM][I]: XV0+PC9QT1JUPjxUWVBFPjwhW0NEQVRBW3ZuY11dPjwvVFlQRT48L0dSQVBI
Tue May  9 19:05:36 2023 [Z0][VMM][I]: SUNTPjxNRU1PUlk+PCFbQ0RBVEFbMTI4XV0+PC9NRU1PUlk+PE5JQ19ERUZB
Tue May  9 19:05:36 2023 [Z0][VMM][I]: VUxUPjxNT0RFTD48IVtDREFUQVt2aXJ0aW9dXT48L01PREVMPjwvTklDX0RF
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RkFVTFQ+PE9TPjxBUkNIPjwhW0NEQVRBW3g4Nl82NF1dPjwvQVJDSD48VVVJ
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RD48IVtDREFUQVszODJlNmE5Yi03NGYwLTRjM2EtOWZkZi03NWUyN2Q0MjQ4
Tue May  9 19:05:36 2023 [Z0][VMM][I]: ZWZdXT48L1VVSUQ+PC9PUz48VEVNUExBVEVfSUQ+PCFbQ0RBVEFbMTFdXT48
Tue May  9 19:05:36 2023 [Z0][VMM][I]: L1RFTVBMQVRFX0lEPjxUTV9NQURfU1lTVEVNPjwhW0NEQVRBW3NoYXJlZF1d
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjwvVE1fTUFEX1NZU1RFTT48Vk1JRD48IVtDREFUQVszMV1dPjwvVk1JRD48
Tue May  9 19:05:36 2023 [Z0][VMM][I]: L1RFTVBMQVRFPjxVU0VSX1RFTVBMQVRFPjxMT0dPPjwhW0NEQVRBW2ltYWdl
Tue May  9 19:05:36 2023 [Z0][VMM][I]: cy9sb2dvcy9saW51eC5wbmddXT48L0xPR08+PExYRF9TRUNVUklUWV9QUklW
Tue May  9 19:05:36 2023 [Z0][VMM][I]: SUxFR0VEPjwhW0NEQVRBW3RydWVdXT48L0xYRF9TRUNVUklUWV9QUklWSUxF
Tue May  9 19:05:36 2023 [Z0][VMM][I]: R0VEPjwvVVNFUl9URU1QTEFURT48SElTVE9SWV9SRUNPUkRTPjxISVNUT1JZ
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjxPSUQ+MzE8L09JRD48U0VRPjY8L1NFUT48SE9TVE5BTUU+T04tREVWLU4x
Tue May  9 19:05:36 2023 [Z0][VMM][I]: LWt2bTwvSE9TVE5BTUU+PEhJRD4wPC9ISUQ+PENJRD4wPC9DSUQ+PFNUSU1F
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjE2ODM2NzM1MzM8L1NUSU1FPjxFVElNRT4wPC9FVElNRT48Vk1fTUFEPjwh
Tue May  9 19:05:36 2023 [Z0][VMM][I]: W0NEQVRBW2t2bV1dPjwvVk1fTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbc2hhcmVk
Tue May  9 19:05:36 2023 [Z0][VMM][I]: XV0+PC9UTV9NQUQ+PERTX0lEPjA8L0RTX0lEPjxQU1RJTUU+MDwvUFNUSU1F
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjxQRVRJTUU+MDwvUEVUSU1FPjxSU1RJTUU+MTY4MzY3MzUzMzwvUlNUSU1F
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjxSRVRJTUU+MDwvUkVUSU1FPjxFU1RJTUU+MDwvRVNUSU1FPjxFRVRJTUU+
Tue May  9 19:05:36 2023 [Z0][VMM][I]: MDwvRUVUSU1FPjxBQ1RJT04+MDwvQUNUSU9OPjxVSUQ+LTE8L1VJRD48R0lE
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Pi0xPC9HSUQ+PFJFUVVFU1RfSUQ+LTE8L1JFUVVFU1RfSUQ+PC9ISVNUT1JZ
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PjwvSElTVE9SWV9SRUNPUkRTPjxCQUNLVVBTPjxCQUNLVVBfQ09ORklHPjxC
Tue May  9 19:05:36 2023 [Z0][VMM][I]: QUNLVVBfVk9MQVRJTEU+PCFbQ0RBVEFbTk9dXT48L0JBQ0tVUF9WT0xBVElM
Tue May  9 19:05:36 2023 [Z0][VMM][I]: RT48RlNfRlJFRVpFPjwhW0NEQVRBW0FHRU5UXV0+PC9GU19GUkVFWkU+PElO
Tue May  9 19:05:36 2023 [Z0][VMM][I]: Q1JFTUVOVEFMX0JBQ0tVUF9JRD48IVtDREFUQVs3Nl1dPjwvSU5DUkVNRU5U
Tue May  9 19:05:36 2023 [Z0][VMM][I]: QUxfQkFDS1VQX0lEPjxLRUVQX0xBU1Q+PCFbQ0RBVEFbN11dPjwvS0VFUF9M
Tue May  9 19:05:36 2023 [Z0][VMM][I]: QVNUPjxMQVNUX0RBVEFTVE9SRV9JRD48IVtDREFUQVsxMDBdXT48L0xBU1Rf
Tue May  9 19:05:36 2023 [Z0][VMM][I]: REFUQVNUT1JFX0lEPjxMQVNUX0lOQ1JFTUVOVF9JRD48IVtDREFUQVswXV0+
Tue May  9 19:05:36 2023 [Z0][VMM][I]: PC9MQVNUX0lOQ1JFTUVOVF9JRD48TU9ERT48IVtDREFUQVtJTkNSRU1FTlRd
Tue May  9 19:05:36 2023 [Z0][VMM][I]: XT48L01PREU+PC9CQUNLVVBfQ09ORklHPjxCQUNLVVBfSURTPjxJRD43NTwv
Tue May  9 19:05:36 2023 [Z0][VMM][I]: SUQ+PElEPjc2PC9JRD48L0JBQ0tVUF9JRFM+PC9CQUNLVVBTPjwvVk0+
Tue May  9 19:05:36 2023 [Z0][VMM][I]: " > /var/lib/one//datastores/0/31/backup/vm.xml
Tue May  9 19:05:36 2023 [Z0][VMM][I]:
Tue May  9 19:05:37 2023 [Z0][VMM][I]: # --------------------------------------
Tue May  9 19:05:37 2023 [Z0][VMM][I]: # Create backup live
Tue May  9 19:05:37 2023 [Z0][VMM][I]: # --------------------------------------
Tue May  9 19:05:37 2023 [Z0][VMM][I]: /var/tmp/one/tm/lib/backup_qcow2.rb -l -d "0:" -x /var/lib/one//datastores/0/31/backup/vm.xml -p /var/lib/one//datastores/0/31
Tue May  9 19:05:37 2023 [Z0][VMM][I]:
Tue May  9 19:05:37 2023 [Z0][VMM][I]: Error: Error executing 'virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/31/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/31/tmp/checkpoint.xml 382e6a9b-74f0-4c3a-9fdf-75e27d4248ef':
Tue May  9 19:05:37 2023 [Z0][VMM][I]:
Tue May  9 19:05:37 2023 [Z0][VMM][I]: error: internal error: unable to execute QEMU command 'transaction': Bitmap already exists: one-31-1
Tue May  9 19:05:37 2023 [Z0][VMM][I]: Error preparing disk files: Error executing 'virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/31/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/31/tmp/checkpoint.xml 382e6a9b-74f0-4c3a-9fdf-75e27d4248ef':
Tue May  9 19:05:37 2023 [Z0][VMM][I]:
Tue May  9 19:05:37 2023 [Z0][VMM][I]: error: internal error: unable to execute QEMU command 'transaction': Bitmap already exists: one-31-1
Tue May  9 19:05:37 2023 [Z0][VMM][I]: + '[' -d /var/lib/one//datastores/0/31/tmp ']'
Tue May  9 19:05:37 2023 [Z0][VMM][I]: + '[' -d /var/lib/one//datastores/0/31/backup ']'
Tue May  9 19:05:37 2023 [Z0][VMM][I]: + mkdir -p /var/lib/one//datastores/0/31/tmp
Tue May  9 19:05:37 2023 [Z0][VMM][I]: + mkdir -p /var/lib/one//datastores/0/31/backup
Tue May  9 19:05:37 2023 [Z0][VMM][I]: + echo 'PFZNPjxJRD4zMTwvSUQ+... [same base64-encoded VM XML as above] ...'
Tue May  9 19:05:37 2023 [Z0][VMM][I]: + /var/tmp/one/tm/lib/backup_qcow2.rb -l -d 0: -x /var/lib/one//datastores/0/31/backup/vm.xml -p /var/lib/one//datastores/0/31
Tue May  9 19:05:37 2023 [Z0][VMM][I]: Failed to execute transfer manager driver operation: prebackup_live.
Tue May  9 19:05:37 2023 [Z0][VMM][E]: BACKUP: ERROR: prebackup_live: Command failed: [... same script and base64-encoded VM XML as above ...] error: internal error: unable to execute QEMU command 'transaction': Bitmap already exists: one-31-1
Tue May  9 19:05:37 2023 [Z0][VM][I]: New LCM state is RUNNING

From the previous output you can see that it failed with Bitmap already exists: one-31-1. The main concern here is why old dirty-bitmaps are still present after a reset backup. A manual cleanup sketch is shown below.
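
As a manual workaround (a sketch only, not the official fix), the stale persistent bitmaps can be removed directly from the qcow2 file with qemu-img while the VM is powered off; the bitmap names are the ones reported above:

# Remove the leftover persistent bitmaps from the on-disk image (the VM must not be running)
qemu-img bitmap --remove /var/lib/one/datastores/0/31/disk.0 one-31-1
qemu-img bitmap --remove /var/lib/one/datastores/0/31/disk.0 one-31-2

Note that this only touches the on-disk bitmaps; any matching libvirt checkpoint metadata would still have to be removed with virsh checkpoint-delete.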

Let's check in another way, this time getting the output in JSON format:

qemu-img info --output json --force-share /var/lib/one/datastores/0/31/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/31/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 3215360,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "in-use",
                        "auto"
                    ],
                    "name": "one-31-2",
                    "granularity": 65536
                },
                {
                    "flags": [
                        "in-use",
                        "auto"
                    ],
                    "name": "one-31-1",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}
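
For a quick check, the bitmap names can be pulled out of this JSON with jq (a sketch, assuming jq is installed on the host):

qemu-img info --output json --force-share /var/lib/one/datastores/0/31/disk.0 | jq -r '.["format-specific"].data.bitmaps[]?.name'

one-31-2
one-31-1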

Let's keep digging into the info:

virsh qemu-monitor-command one-31 --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/31/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 3215360,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "bitmaps": [
                {
                  "flags": [
                    "in-use",
                    "auto"
                  ],
                  "name": "one-31-2",
                  "granularity": 65536
                },
                {
                  "flags": [
                    "in-use",
                    "auto"
                  ],
                  "name": "one-31-1",
                  "granularity": 65536
                }
              ],
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-31-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 655360
          },
          {
            "name": "one-31-1",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 1638400
          },
          {
            "name": "one-31-2",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 1638400
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/31/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/31/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/31/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-12028"
}
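
While the VM is running, an in-memory dirty bitmap can also be dropped through QMP. This is a sketch only: the node name libvirt-2-format is taken from the output above, and removing bitmaps behind libvirt's back can leave its checkpoint metadata inconsistent, so virsh checkpoint-delete is preferable whenever a matching checkpoint exists:

virsh qemu-monitor-command one-31 '{"execute": "block-dirty-bitmap-remove", "arguments": {"node": "libvirt-2-format", "name": "one-31-1"}}'

Since the bitmap is persistent, QEMU also deletes it from the underlying qcow2 file.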

Expected behavior

Bitmaps are created for every backup and the old ones are deleted after a reset backup. No bitmap collisions.


rsmontero commented 1 year ago

As you have seen, bitmaps are synchronized into the qcow2 file when the VM is powered off. While it is running they are kept in libvirt. The problem in this situation is that when the VM is resumed the bitmaps are not redefined by libvirt, and thus they are not cleaned in the cleanup phase of the reset operation.
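
For reference, the essence of the fix can be reproduced by hand (a minimal sketch; checkpoint.xml must contain the original checkpoint definition, and the domain UUID is the one from the logs above):

virsh --connect qemu:///system checkpoint-create --xmlfile checkpoint.xml --redefine 382e6a9b-74f0-4c3a-9fdf-75e27d4248ef
virsh --connect qemu:///system checkpoint-delete 382e6a9b-74f0-4c3a-9fdf-75e27d4248ef one-31-1

Once redefined, the checkpoint and its bitmap can be deleted normally, which is exactly what the cleanup phase of the reset operation relies on.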

This commit solves the issue: 7471016edbf5739e64b849b4d8d50bc

You can easily take the new file: https://github.com/OpenNebula/one/blob/master/src/tm_mad/lib/backup_qcow2.rb

Franco-Sparrow commented 1 year ago

Thanks, Sir, for your quick response. We will test on our end and let you know the results.

Cheers

Franco-Sparrow commented 1 year ago

@rsmontero thanks for your commits. It looks like the issue is now fixed. I ran a thorough round of backup tests on VM 36. I will leave the tests here in case someone else wants to try them (they should be run for every new ON release from now on, in order to test the backups).

INDEX

1. Debugging backup and restore functionalities

Create the following variables file on the nodes and the orchestrator:

cat << EOF > vars
VM_ID=36
DS_ID=100
EOF
source vars

Edit the /var/lib/one/remotes/tm/lib/backup_qcow2.rb file:

sudo -u oneadmin nano /var/lib/one/remotes/tm/lib/backup_qcow2.rb

Set the following line to enable backup_qcow2.rb logging:

LOG_FILE     = '/var/log/one/backup_qcow2.log'

Sync the hosts:

sudo -u oneadmin onehost sync -f

* Adding ON-DEV-N1-kvm to upgrade
[========================================] 1/1 ON-DEV-N1-kvm
All hosts updated successfully.
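
Optionally, confirm that the synced copy on the host actually carries the change (a quick check, assuming passwordless SSH to the host as oneadmin):

ssh ON-DEV-N1-kvm grep '^LOG_FILE' /var/tmp/one/tm/lib/backup_qcow2.rb

LOG_FILE     = '/var/log/one/backup_qcow2.log'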

1.1. ACTION#1: Backup#1 (Full live backup)

Create file test-img82-disk.0.0 inside VM 36:

touch test-img82-disk.0.0

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:09:37 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:09:43 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:09:48 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:09:50 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:09:50 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:09:50 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log:

12:09:39.759 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:09:39.777 [CMD]: DONE
12:09:39.777 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:09:39.785 [CMD]: DONE
12:09:39.788 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:09:39.836 [CMD]: DONE
12:09:39.836 [CMD]: virsh --connect qemu:///system snapshot-create-as --name one-36-backup --disk-only --atomic --diskspec vda,file=/var/lib/one//datastores/0/36/tmp/overlay_0.qcow2 0f297c4b-001f-434d-9123-739342da23fb
12:09:40.315 [CMD]: DONE
12:09:40.315 [CMD]: virsh --connect qemu:///system checkpoint-create --xmlfile /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:09:40.435 [CMD]: DONE
12:09:40.435 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:09:40.565 [CMD]: DONE
12:09:40.583 [CMD]: qemu-img convert -m 4 -O qcow2 -U /var/lib/datastores/local_mount/0/36/disk.0.snap/0 /var/lib/one//datastores/0/36/backup/disk.0.0
12:09:43.249 [CMD]: DONE
12:09:43.249 [BCK]: Full backup done in 3.464426827s
12:09:49.365 [CMD]: virsh --connect qemu:///system blockcommit --base /var/lib/one//datastores/0/36/disk.0 --active --pivot --keep-relative 0f297c4b-001f-434d-9123-739342da23fb vda
12:09:50.564 [CMD]: DONE
12:09:50.564 [CMD]: virsh --connect qemu:///system snapshot-delete --snapshotname one-36-backup --metadata 0f297c4b-001f-434d-9123-739342da23fb
12:09:50.582 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 82

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:09:50 33a8ff

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-0   2023-05-10 12:09:40 -0400

Checking bitmaps (while the VM is running the active bitmaps are held by QEMU, so none appear yet in the qcow2 header):

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 3346432,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 3346432,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 393216
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-5-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-1583"
}

1.2. ACTION#2: Backup#2 (Incremental live backup)

Create file test-img82-disk.0.1 inside VM 36:

touch test-img82-disk.0.1

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:17:08 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:17:10 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:17:13 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:17:14 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:17:14 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:17:14 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log:

12:17:09.936 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:17:09.954 [CMD]: DONE
12:17:09.955 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:17:10.011 [CMD]: DONE
12:17:10.015 [CMD]: qemu-img create -f qcow2 /var/lib/one//datastores/0/36/tmp/scracth.0.qcow2 256M
12:17:10.085 [CMD]: DONE
12:17:10.086 [CMD]: virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/36/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:17:10.523 [CMD]: DONE
12:17:10.523 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:17:10.600 [CMD]: DONE
12:17:10.603 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:backup-vda nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket
12:17:10.620 [CMD]: DONE
12:17:10.621 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket /var/lib/one//datastores/0/36/backup/disk.0.1
12:17:10.692 [CMD]: DONE
12:17:10.693 [CMD]: qemu-io
12:17:10.922 [CMD]: DONE
12:17:10.922 [BCK]: Incremental backup done in 0.968132316s
12:17:10.922 [CMD]: virsh --connect qemu:///system domjobabort 0f297c4b-001f-434d-9123-739342da23fb
12:17:10.943 [CMD]: DONE
12:17:10.943 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:17:10.961 [CMD]: DONE
12:17:10.962 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:17:10.967 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 82

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:09:50 33a8ff
  1   0 I 1M        05/10 12:17:14 e50606

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-0   2023-05-10 12:09:40 -0400
 one-36-1   2023-05-10 12:17:10 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 3346432,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 3346432,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-1",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 524288
          },
          {
            "name": "one-36-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 589824
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-5-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-2401"
}

1.3. ACTION#3: Backup#3 (Incremental live backup)

Create file test-img82-disk.0.2 inside VM 36:

touch test-img82-disk.0.2

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:23:10 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:23:13 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:23:15 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:23:16 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:23:16 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:23:16 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log (note that the oldest checkpoint, one-36-0, is deleted at the end of the run):

12:23:12.061 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:23:12.079 [CMD]: DONE
12:23:12.080 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:23:12.149 [CMD]: DONE
12:23:12.152 [CMD]: qemu-img create -f qcow2 /var/lib/one//datastores/0/36/tmp/scracth.0.qcow2 256M
12:23:12.211 [CMD]: DONE
12:23:12.211 [CMD]: virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/36/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:23:12.641 [CMD]: DONE
12:23:12.641 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:23:12.716 [CMD]: DONE
12:23:12.719 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:backup-vda nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket
12:23:12.726 [CMD]: DONE
12:23:12.726 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket /var/lib/one//datastores/0/36/backup/disk.0.2
12:23:12.794 [CMD]: DONE
12:23:12.794 [CMD]: qemu-io
12:23:13.034 [CMD]: DONE
12:23:13.034 [BCK]: Incremental backup done in 0.955029454s
12:23:13.034 [CMD]: virsh --connect qemu:///system domjobabort 0f297c4b-001f-434d-9123-739342da23fb
12:23:13.054 [CMD]: DONE
12:23:13.055 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:23:13.072 [CMD]: DONE
12:23:13.072 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:23:13.077 [CMD]: DONE
12:23:13.078 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-0
12:23:13.399 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 82

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:09:50 33a8ff
  1   0 I 1M        05/10 12:17:14 e50606
  2   1 I 1M        05/10 12:23:16 5b2aea

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-1   2023-05-10 12:17:10 -0400
 one-36-2   2023-05-10 12:23:12 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 3346432,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 3346432,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-2",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 393216
          },
          {
            "name": "one-36-1",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 589824
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-5-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-3179"
}

1.4. ACTION#4: Changing the VM state from RUNNING to POWEROFF

Create file test-img82-disk.0.3 inside VM 36:

touch test-img82-disk.0.3

Power off the VM:

source vars
onevm poweroff $VM_ID

Checking bitmaps (after poweroff the bitmaps are flushed into the qcow2 file and no longer carry the in-use flag):

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 3346432,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-2",
                    "granularity": 65536
                },
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-1",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

1.5. ACTION#5: Backup#4 (Incremental backup)

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:38:28 2023 [Z0][VM][I]: New LCM state is BACKUP_POWEROFF
Wed May 10 12:38:33 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup.
Wed May 10 12:38:35 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:38:36 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup.
Wed May 10 12:38:36 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:38:36 2023 [Z0][VM][I]: New state is POWEROFF
Wed May 10 12:38:36 2023 [Z0][VM][I]: New LCM state is LCM_INIT

File /var/log/one/backup_qcow2.log (with the VM powered off, the driver rotates bitmaps directly on the qcow2 file with qemu-img bitmap):

12:38:31.459 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:one-36-2 nbd+unix:///?socket=/var/lib/datastores/local_mount/0/36/disk.0.snap/0.socket
12:38:31.477 [CMD]: DONE
12:38:31.478 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///?socket=/var/lib/datastores/local_mount/0/36/disk.0.snap/0.socket /var/lib/one//datastores/0/36/backup/disk.0.3
12:38:31.561 [CMD]: DONE
12:38:31.561 [CMD]: qemu-io
12:38:31.870 [CMD]: DONE
12:38:33.221 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:38:33.227 [CMD]: DONE
12:38:33.227 [CMD]: qemu-img bitmap --remove /var/lib/datastores/local_mount/0/36/disk.0.snap/0 one-36-1
12:38:33.357 [CMD]: DONE
12:38:33.357 [CMD]: qemu-img bitmap --add /var/lib/datastores/local_mount/0/36/disk.0.snap/0 one-36-3
12:38:33.428 [CMD]: DONE
12:38:33.428 [BCK]: Incremental backup done in 2.972525751s

Checking backup image for current chain of increments:

oneimage show 82

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:09:50 33a8ff
  1   0 I 1M        05/10 12:17:14 e50606
  2   1 I 1M        05/10 12:23:16 5b2aea
  3   2 I 2M        05/10 12:38:36 07bd13

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-2",
                    "granularity": 65536
                },
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-3",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

1.6. ACTION#6: Changing the VM state from POWEROFF to RUNNING

Resume the VM:

source vars
onevm resume $VM_ID
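
At this point libvirt is expected to list no checkpoints for the domain, since checkpoints are not redefined across a power cycle; the driver will redefine them during the next backup. A quick check (expected to come back empty here):

source vars
virsh checkpoint-list one-$VM_ID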

1.7. ACTION#7: Backup#5 (Incremental live backup)

Create file test-img82-disk.0.4 inside VM 36:

touch test-img82-disk.0.4

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:43:50 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:43:54 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:43:56 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:43:57 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:43:57 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:43:57 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log (note the checkpoint-create --redefine calls: after the power cycle the driver redefines the checkpoints in libvirt so they can be cleaned up later):

12:43:52.427 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:43:52.446 [CMD]: DONE
12:43:52.448 [CMD]: virsh --connect qemu:///system checkpoint-create --xmlfile /var/lib/one//datastores/0/36/tmp/checkpoint.xml --redefine 0f297c4b-001f-434d-9123-739342da23fb
12:43:52.533 [CMD]: DONE
12:43:52.534 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:43:52.626 [CMD]: DONE
12:43:52.630 [CMD]: qemu-img create -f qcow2 /var/lib/one//datastores/0/36/tmp/scracth.0.qcow2 256M
12:43:52.706 [CMD]: DONE
12:43:52.712 [CMD]: virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/36/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:43:53.128 [CMD]: DONE
12:43:53.129 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:43:53.210 [CMD]: DONE
12:43:53.212 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:backup-vda nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket
12:43:53.219 [CMD]: DONE
12:43:53.219 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket /var/lib/one//datastores/0/36/backup/disk.0.4
12:43:53.296 [CMD]: DONE
12:43:53.296 [CMD]: qemu-io
12:43:53.606 [CMD]: DONE
12:43:53.606 [BCK]: Incremental backup done in 1.073567356s
12:43:53.606 [CMD]: virsh --connect qemu:///system domjobabort 0f297c4b-001f-434d-9123-739342da23fb
12:43:53.628 [CMD]: DONE
12:43:53.628 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:43:53.646 [CMD]: DONE
12:43:53.647 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:43:53.653 [CMD]: DONE
12:43:53.655 [CMD]: virsh --connect qemu:///system checkpoint-create --xmlfile /var/lib/one//datastores/0/36/tmp/checkpoint.xml --redefine 0f297c4b-001f-434d-9123-739342da23fb
12:43:54.007 [CMD]: DONE
12:43:54.007 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-2
12:43:54.076 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 82

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:09:50 33a8ff
  1   0 I 1M        05/10 12:17:14 e50606
  2   1 I 1M        05/10 12:23:16 5b2aea
  3   2 I 2M        05/10 12:38:36 07bd13
  4   3 I 2M        05/10 12:43:57 e20101

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-3   2023-05-10 12:43:52 -0400
 one-36-4   2023-05-10 12:43:52 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "in-use",
                        "auto"
                    ],
                    "name": "one-36-3",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 4268032,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "bitmaps": [
                {
                  "flags": [
                    "in-use",
                    "auto"
                  ],
                  "name": "one-36-3",
                  "granularity": 65536
                }
              ],
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-4",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 327680
          },
          {
            "name": "one-36-3",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 1114112
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-979"
}

1.8. ACTION#8: Backup#6 (Reset live backup)

Create file test-img83-disk.0.0 inside VM 36:

touch test-img83-disk.0.0

Executing backup:

source vars
onevm backup --reset -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:48:14 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:48:20 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:48:25 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:48:27 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:48:27 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:48:27 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log (the two remaining checkpoints are deleted before the new full backup, so no stale bitmaps are left behind):

12:48:16.228 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:48:16.247 [CMD]: DONE
12:48:16.247 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:48:16.253 [CMD]: DONE
12:48:16.253 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-3
12:48:16.314 [CMD]: DONE
12:48:16.314 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-4
12:48:16.337 [CMD]: DONE
12:48:16.341 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:48:16.401 [CMD]: DONE
12:48:16.401 [CMD]: virsh --connect qemu:///system snapshot-create-as --name one-36-backup --disk-only --atomic --diskspec vda,file=/var/lib/one//datastores/0/36/tmp/overlay_0.qcow2 0f297c4b-001f-434d-9123-739342da23fb
12:48:16.885 [CMD]: DONE
12:48:16.885 [CMD]: virsh --connect qemu:///system checkpoint-create --xmlfile /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:48:16.974 [CMD]: DONE
12:48:16.974 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:48:17.104 [CMD]: DONE
12:48:17.104 [CMD]: qemu-img convert -m 4 -O qcow2 -U /var/lib/datastores/local_mount/0/36/disk.0.snap/0 /var/lib/one//datastores/0/36/backup/disk.0.0
12:48:20.059 [CMD]: DONE
12:48:20.059 [BCK]: Full backup done in 3.72172468s
12:48:26.398 [CMD]: virsh --connect qemu:///system blockcommit --base /var/lib/one//datastores/0/36/disk.0 --active --pivot --keep-relative 0f297c4b-001f-434d-9123-739342da23fb vda
12:48:27.606 [CMD]: DONE
12:48:27.609 [CMD]: virsh --connect qemu:///system snapshot-delete --snapshotname one-36-backup --metadata 0f297c4b-001f-434d-9123-739342da23fb
12:48:27.627 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 83

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:48:27 632618

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-0   2023-05-10 12:48:16 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 4268032,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 327680
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-1444"
}

1.9. ACTION#9: Backup#7 (Incremental live backup)

Create file test-img83-disk.0.1 inside VM 36:

touch test-img83-disk.0.1

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:51:59 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:52:02 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:52:04 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:52:06 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:52:06 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:52:06 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log:

12:52:01.676 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:52:01.694 [CMD]: DONE
12:52:01.695 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:52:01.769 [CMD]: DONE
12:52:01.772 [CMD]: qemu-img create -f qcow2 /var/lib/one//datastores/0/36/tmp/scracth.0.qcow2 256M
12:52:01.853 [CMD]: DONE
12:52:01.854 [CMD]: virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/36/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:52:02.279 [CMD]: DONE
12:52:02.279 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:52:02.358 [CMD]: DONE
12:52:02.361 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:backup-vda nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket
12:52:02.367 [CMD]: DONE
12:52:02.368 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket /var/lib/one//datastores/0/36/backup/disk.0.1
12:52:02.432 [CMD]: DONE
12:52:02.432 [CMD]: qemu-io
12:52:02.682 [CMD]: DONE
12:52:02.682 [BCK]: Incremental backup done in 0.987873826s
12:52:02.682 [CMD]: virsh --connect qemu:///system domjobabort 0f297c4b-001f-434d-9123-739342da23fb
12:52:02.703 [CMD]: DONE
12:52:02.703 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:52:02.721 [CMD]: DONE
12:52:02.722 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:52:02.727 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 83

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:48:27 632618
  1   0 I 2M        05/10 12:52:06 b89012

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-0   2023-05-10 12:48:16 -0400
 one-36-1   2023-05-10 12:52:01 -0400

Checking bitmaps (while the VM is running, the active dirty-bitmaps are kept by libvirt/QEMU and have not yet been flushed into the qcow2 file, so none show up here):

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 4268032,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-1",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 327680
          },
          {
            "name": "one-36-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 720896
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-1870"
}
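
For a quick check of just the bitmap names on each device, the query-block output can be filtered (assuming jq is available on the host):

source vars
virsh qemu-monitor-command one-$VM_ID '{"execute": "query-block", "arguments": {}}' \
  | jq -r '.return[].inserted["dirty-bitmaps"][]?.name'

At this point it should list one-36-1 and one-36-0.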

1.10. ACTION#10: Backup#8 (Incremental live backup)

Create the file test-img83-disk.0.2 inside VM 36:

touch test-img83-disk.0.2

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 12:54:24 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 12:54:28 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 12:54:30 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 12:54:31 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 12:54:31 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 12:54:31 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log:

12:54:26.560 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:54:26.578 [CMD]: DONE
12:54:26.579 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
12:54:26.673 [CMD]: DONE
12:54:26.677 [CMD]: qemu-img create -f qcow2 /var/lib/one//datastores/0/36/tmp/scracth.0.qcow2 256M
12:54:26.769 [CMD]: DONE
12:54:26.770 [CMD]: virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/36/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
12:54:27.216 [CMD]: DONE
12:54:27.216 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
12:54:27.316 [CMD]: DONE
12:54:27.319 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:backup-vda nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket
12:54:27.326 [CMD]: DONE
12:54:27.326 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket /var/lib/one//datastores/0/36/backup/disk.0.2
12:54:27.399 [CMD]: DONE
12:54:27.399 [CMD]: qemu-io
12:54:27.654 [CMD]: DONE
12:54:27.654 [BCK]: Incremental backup done in 1.076439023s
12:54:27.654 [CMD]: virsh --connect qemu:///system domjobabort 0f297c4b-001f-434d-9123-739342da23fb
12:54:27.675 [CMD]: DONE
12:54:27.675 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
12:54:27.693 [CMD]: DONE
12:54:27.694 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
12:54:27.699 [CMD]: DONE
12:54:27.699 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-0
12:54:28.021 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 83

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 12:48:27 632618
  1   0 I 2M        05/10 12:52:06 b89012
  2   1 I 1M        05/10 12:54:31 519058

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-1   2023-05-10 12:52:01 -0400
 one-36-2   2023-05-10 12:54:26 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 4268032,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-2",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 327680
          },
          {
            "name": "one-36-1",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 589824
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-2249"
}

1.11. ACTION#11: Changing the VM state from RUNNING to POWEROFF

Create the file test-img84-disk.0.0 inside VM 36:

touch test-img84-disk.0.0

Poweroff the VM:

source vars
onevm poweroff $VM_ID

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4333568,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-2",
                    "granularity": 65536
                },
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-1",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}
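
With the VM powered off, the bitmaps are now stored in the image itself, under format-specific.data.bitmaps above. To list only their names (again assuming jq):

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0 \
  | jq -r '."format-specific".data.bitmaps[]?.name'

Here it should print one-36-2 and one-36-1.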

1.12. ACTION#12: Backup#9 (Reset backup)

Executing backup:

source vars
onevm backup --reset -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 13:01:03 2023 [Z0][VM][I]: New LCM state is BACKUP_POWEROFF
Wed May 10 13:01:09 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup.
Wed May 10 13:01:15 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 13:01:16 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup.
Wed May 10 13:01:16 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 13:01:16 2023 [Z0][VM][I]: New state is POWEROFF
Wed May 10 13:01:16 2023 [Z0][VM][I]: New LCM state is LCM_INIT

File /var/log/one/backup_qcow2.log:

13:01:06.376 [CMD]: qemu-img convert -m 4 -O qcow2 -U /var/lib/datastores/local_mount/0/36/disk.0.snap/0 /var/lib/one//datastores/0/36/backup/disk.0.0
13:01:09.089 [CMD]: DONE
13:01:09.089 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
13:01:09.095 [CMD]: DONE
13:01:09.095 [CMD]: qemu-img bitmap --remove /var/lib/datastores/local_mount/0/36/disk.0.snap/0 one-36-2
13:01:09.221 [CMD]: DONE
13:01:09.221 [CMD]: qemu-img bitmap --remove /var/lib/datastores/local_mount/0/36/disk.0.snap/0 one-36-1
13:01:09.273 [CMD]: DONE
13:01:09.274 [CMD]: qemu-img bitmap --add /var/lib/datastores/local_mount/0/36/disk.0.snap/0 one-36-0
13:01:09.309 [CMD]: DONE
13:01:09.310 [BCK]: Full backup done in 2.935092613s

Checking backup image for current chain of increments:

oneimage show 84

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 13:01:16 e6c778

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "auto"
                    ],
                    "name": "one-36-0",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

1.13. ACTION#13: Changing the VM state from POWEROFF to RUNNING

Resume the VM:

source vars
onevm resume $VM_ID

1.14. ACTION#14: Backup#10 (Incremental live backup)

Create the file test-img84-disk.0.1 inside VM 36:

touch test-img84-disk.0.1

Executing backup:

source vars
onevm backup -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 13:06:52 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 13:06:55 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 13:06:58 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 13:06:59 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 13:06:59 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 13:06:59 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log:

13:06:54.757 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
13:06:54.776 [CMD]: DONE
13:06:54.779 [CMD]: virsh --connect qemu:///system checkpoint-create --xmlfile /var/lib/one//datastores/0/36/tmp/checkpoint.xml --redefine 0f297c4b-001f-434d-9123-739342da23fb
13:06:54.856 [CMD]: DONE
13:06:54.857 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
13:06:54.946 [CMD]: DONE
13:06:54.950 [CMD]: qemu-img create -f qcow2 /var/lib/one//datastores/0/36/tmp/scracth.0.qcow2 256M
13:06:55.018 [CMD]: DONE
13:06:55.031 [CMD]: virsh --connect qemu:///system backup-begin --reuse-external --backupxml /var/lib/one//datastores/0/36/tmp/backup.xml --checkpointxml /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
13:06:55.453 [CMD]: DONE
13:06:55.453 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
13:06:55.534 [CMD]: DONE
13:06:55.537 [CMD]: nbdinfo --json --map=qemu:dirty-bitmap:backup-vda nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket
13:06:55.554 [CMD]: DONE
13:06:55.554 [CMD]: qemu-img create -f qcow2 -F raw -b nbd+unix:///vda?socket=/var/lib/one//datastores/0/36/backup.socket /var/lib/one//datastores/0/36/backup/disk.0.1
13:06:55.625 [CMD]: DONE
13:06:55.625 [CMD]: qemu-io
13:06:55.918 [CMD]: DONE
13:06:55.918 [BCK]: Incremental backup done in 1.061833988s
13:06:55.918 [CMD]: virsh --connect qemu:///system domjobabort 0f297c4b-001f-434d-9123-739342da23fb
13:06:55.939 [CMD]: DONE
13:06:55.939 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
13:06:55.957 [CMD]: DONE
13:06:55.958 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
13:06:55.964 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 84

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 13:01:16 e6c778
  1   0 I 2M        05/10 13:06:59 813641

Checking the current checkpoints (note that one-36-0 was just redefined at 13:06:54 by the checkpoint-create --redefine call in the log above):

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-0   2023-05-10 13:06:54 -0400
 one-36-1   2023-05-10 13:06:55 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "bitmaps": [
                {
                    "flags": [
                        "in-use",
                        "auto"
                    ],
                    "name": "one-36-0",
                    "granularity": 65536
                }
            ],
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 4268032,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "bitmaps": [
                {
                  "flags": [
                    "in-use",
                    "auto"
                  ],
                  "name": "one-36-0",
                  "granularity": 65536
                }
              ],
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-1",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 327680
          },
          {
            "name": "one-36-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 1179648
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-872"
}

1.15. ACTION#15: Backup#11 (Reset live backup)

Create the file test-img85-disk.0.0 inside VM 36:

touch test-img85-disk.0.0

Executing backup:

source vars
onevm backup --reset -d $DS_ID $VM_ID

File /var/log/one/36.log:

Wed May 10 13:10:34 2023 [Z0][VM][I]: New LCM state is BACKUP
Wed May 10 13:10:39 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: prebackup_live.
Wed May 10 13:10:44 2023 [Z0][VMM][I]: Successfully execute  operation: backup.
Wed May 10 13:10:46 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: postbackup_live.
Wed May 10 13:10:46 2023 [Z0][VMM][I]: VM backup successfully created.
Wed May 10 13:10:46 2023 [Z0][VM][I]: New LCM state is RUNNING

File /var/log/one/backup_qcow2.log:

13:10:36.589 [CMD]: virsh --connect qemu:///system checkpoint-list --name 0f297c4b-001f-434d-9123-739342da23fb
13:10:36.608 [CMD]: DONE
13:10:36.609 [CMD]: qemu-img info --output json --force-share /var/lib/datastores/local_mount/0/36/disk.0.snap/0
13:10:36.614 [CMD]: DONE
13:10:36.614 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-0
13:10:36.680 [CMD]: DONE
13:10:36.680 [CMD]: virsh --connect qemu:///system checkpoint-delete 0f297c4b-001f-434d-9123-739342da23fb one-36-1
13:10:36.702 [CMD]: DONE
13:10:36.706 [CMD]: virsh --connect qemu:///system domfsfreeze 0f297c4b-001f-434d-9123-739342da23fb
13:10:36.766 [CMD]: DONE
13:10:36.766 [CMD]: virsh --connect qemu:///system snapshot-create-as --name one-36-backup --disk-only --atomic --diskspec vda,file=/var/lib/one//datastores/0/36/tmp/overlay_0.qcow2 0f297c4b-001f-434d-9123-739342da23fb
13:10:37.230 [CMD]: DONE
13:10:37.230 [CMD]: virsh --connect qemu:///system checkpoint-create --xmlfile /var/lib/one//datastores/0/36/tmp/checkpoint.xml 0f297c4b-001f-434d-9123-739342da23fb
13:10:37.314 [CMD]: DONE
13:10:37.314 [CMD]: virsh --connect qemu:///system domfsthaw 0f297c4b-001f-434d-9123-739342da23fb
13:10:37.421 [CMD]: DONE
13:10:37.421 [CMD]: qemu-img convert -m 4 -O qcow2 -U /var/lib/datastores/local_mount/0/36/disk.0.snap/0 /var/lib/one//datastores/0/36/backup/disk.0.0
13:10:39.711 [CMD]: DONE
13:10:39.711 [BCK]: Full backup done in 3.008608014s
13:10:45.558 [CMD]: virsh --connect qemu:///system blockcommit --base /var/lib/one//datastores/0/36/disk.0 --active --pivot --keep-relative 0f297c4b-001f-434d-9123-739342da23fb vda
13:10:46.777 [CMD]: DONE
13:10:46.777 [CMD]: virsh --connect qemu:///system snapshot-delete --snapshotname one-36-backup --metadata 0f297c4b-001f-434d-9123-739342da23fb
13:10:46.796 [CMD]: DONE

Checking backup image for current chain of increments:

oneimage show 85

# [...]
BACKUP INCREMENTS
 ID PID T SIZE                DATE SOURCE
  0  -1 F 118M      05/10 13:10:46 e727b6

Checking the current checkpoints:

source vars
virsh checkpoint-list one-$VM_ID

 Name       Creation Time
---------------------------------------
 one-36-0   2023-05-10 13:10:37 -0400

Checking bitmaps:

source vars
qemu-img info --output json --force-share /var/lib/one/datastores/0/$VM_ID/disk.0

{
    "backing-filename-format": "qcow2",
    "virtual-size": 268435456,
    "filename": "/var/lib/one/datastores/0/36/disk.0",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 4268032,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
    "dirty-flag": false
}

Checking bitmaps and dirty-bitmaps through virsh qemu-monitor-command:

source vars
virsh qemu-monitor-command one-$VM_ID --pretty  '{"execute": "query-block", "arguments": {}}'

{
  "return": [
    {
      "io-status": "ok",
      "device": "",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "backing-image": {
            "virtual-size": 268435456,
            "filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
            "cluster-size": 65536,
            "format": "qcow2",
            "actual-size": 66424832,
            "format-specific": {
              "type": "qcow2",
              "data": {
                "compat": "1.1",
                "compression-type": "zlib",
                "lazy-refcounts": false,
                "refcount-bits": 16,
                "corrupt": false,
                "extended-l2": false
              }
            },
            "dirty-flag": false
          },
          "backing-filename-format": "qcow2",
          "virtual-size": 268435456,
          "filename": "/var/lib/one//datastores/0/36/disk.0",
          "cluster-size": 65536,
          "format": "qcow2",
          "actual-size": 4268032,
          "format-specific": {
            "type": "qcow2",
            "data": {
              "compat": "1.1",
              "compression-type": "zlib",
              "lazy-refcounts": false,
              "refcount-bits": 16,
              "corrupt": false,
              "extended-l2": false
            }
          },
          "full-backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "backing-filename": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "node-name": "libvirt-2-format",
        "backing_file_depth": 1,
        "drv": "qcow2",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "backing_file": "/var/lib/one/datastores/1/6368bc925d188f1dc0e711584250cc50",
        "dirty-bitmaps": [
          {
            "name": "one-36-0",
            "recording": true,
            "persistent": true,
            "busy": false,
            "granularity": 65536,
            "count": 458752
          }
        ],
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": true,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.0"
      },
      "qdev": "/machine/peripheral/virtio-disk0/virtio-backend",
      "type": "unknown"
    },
    {
      "io-status": "ok",
      "device": "",
      "locked": true,
      "removable": true,
      "inserted": {
        "iops_rd": 0,
        "detect_zeroes": "off",
        "image": {
          "virtual-size": 372736,
          "filename": "/var/lib/one//datastores/0/36/disk.1",
          "format": "raw",
          "actual-size": 372736,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": true,
        "node-name": "libvirt-1-format",
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 0,
        "bps_wr": 0,
        "write_threshold": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "cache": {
          "no-flush": false,
          "direct": false,
          "writeback": true
        },
        "file": "/var/lib/one//datastores/0/36/disk.1"
      },
      "qdev": "ide0-0-0",
      "tray_open": false,
      "type": "unknown"
    }
  ],
  "id": "libvirt-1316"
}

1.16. ACTION#16: Select a backup and restore it

This time, we will select backup image 84 and increment 1:

oneimage restore -d 1 --name test-img84-inc1 --no_ip --no_nic --increment 1 84

Create a new VM from the restored template test-img84-inc1; it will be deployed using the restored backup image test-img84-inc1-disk-0.
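
For example (the VM name is an arbitrary placeholder):

onetemplate instantiate test-img84-inc1 --name test-img84-inc1-vm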

On this new VM, running ls -l should show all the files created up to test-img84-disk.0.1 (but not test-img85-disk.0.0, which was created after that increment was taken).

nachowork90 commented 1 year ago

As you have seen, bitmaps are synchronized into the qcow2 file when the VM is powered off; while it is running, they are kept in libvirt. The problem in this situation is that when the VM is resumed the bitmaps are not redefined by libvirt, and thus they are not cleaned in the cleanup phase of the reset operation.
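
A minimal sketch of that cleanup idea in Ruby (not the committed fix referenced below), reusing the same qemu-img calls already seen in the logs above: drop every leftover bitmap from the qcow2 image and recreate the initial one-<vmid>-0 bitmap for the new chain. It assumes the VM is powered off and disk points at the qcow2 file.

require 'json'
require 'open3'

def reset_bitmaps(disk, vm_id)
  out, _err, _status = Open3.capture3('qemu-img', 'info', '--output', 'json',
                                      '--force-share', disk)
  bitmaps = JSON.parse(out).dig('format-specific', 'data', 'bitmaps') || []

  # Remove all stale bitmaps left over from previous backup chains
  bitmaps.each do |bm|
    Open3.capture3('qemu-img', 'bitmap', '--remove', disk, bm['name'])
  end

  # Recreate the initial dirty-bitmap for the new chain
  Open3.capture3('qemu-img', 'bitmap', '--add', disk, "one-#{vm_id}-0")
end

reset_bitmaps('/var/lib/one/datastores/0/36/disk.0', 36)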

This commit solves the issue: 7471016

You can easily take the new file: https://github.com/OpenNebula/one/blob/master/src/tm_mad/lib/backup_qcow2.rb

This new commit completely fixes the backup process! Like @Franco-Sparrow, I have done a ton of tests, and the checkpoints, bitmaps, and dirty-bitmaps always end up in sync!

kCyborg commented 1 year ago

@Franco-Sparrow pretty awesome job!!! I have reviewed some of your issues, and you have helped me understand some technical parts of the new backup solution's internals. Thanks!

rsmontero commented 1 year ago

Closing this one. Once again thank you @Franco-Sparrow for that awesome feedback :smile: :heart: