nirs closed this issue 2 years ago
I tested this change with qcow2 file on block storage:
diff --git a/ovirt_imageio/_internal/io.py b/ovirt_imageio/_internal/io.py
index 93aa758..0ba285b 100644
--- a/ovirt_imageio/_internal/io.py
+++ b/ovirt_imageio/_internal/io.py
@@ -21,7 +21,7 @@ from . backends import Wrapper
# Limit maximum zero and copy size to spread the workload better to multiple
# workers and ensure frequent progress updates when handling large extents.
-MAX_ZERO_SIZE = 128 * 1024**2
+MAX_ZERO_SIZE = 1 * 1024**3
MAX_COPY_SIZE = 128 * 1024**2
# NBD hard limit.
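The effect of the patch above is easy to estimate with back-of-the-envelope arithmetic: raising MAX_ZERO_SIZE from 128 MiB to 1 GiB cuts the number of zero requests for a fully-zeroed image by 8x. A minimal sketch (not project code; it assumes one large zero extent split evenly into MAX_ZERO_SIZE chunks):

```python
# Request-count arithmetic for the MAX_ZERO_SIZE change.
# Assumption: an 8 TiB image that is entirely zero, split evenly.
IMAGE_SIZE = 8 * 1024**4       # 8 TiB virtual size
OLD_MAX_ZERO = 128 * 1024**2   # 128 MiB (before the patch)
NEW_MAX_ZERO = 1 * 1024**3     # 1 GiB (after the patch)

old_requests = IMAGE_SIZE // OLD_MAX_ZERO
new_requests = IMAGE_SIZE // NEW_MAX_ZERO
print(old_requests, new_requests)  # 65536 8192
```

Fewer requests means less per-request overhead, but as noted below, it does not help if each NBD_WRITE_ZEROES triggers an fdatasync() in the qcow2 driver.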
This does not make a significant difference. Testing qemu-img convert shows a similar slowdown, which shows that the issue is in the qemu qcow2 driver. It looks like the driver is calling fdatasync() when handling the NBD_WRITE_ZEROES command.
Regardless of the qemu issue, there is no reason to zero a new qcow2 image. We could get the destination image extents, find that the image is already zeroed, and skip all zero calls. This works only if we know that the qcow2 image does not have a backing file. The NBD protocol does not expose this info, but we can add more details in the ticket.
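The proposed optimization could look roughly like the sketch below. This is illustrative only: Extent and destination_is_zeroed are hypothetical names, not the real ovirt_imageio API.

```python
# Hypothetical sketch of "skip zeroing an already-zeroed destination".
# Extent and destination_is_zeroed are illustrative, not imageio API.
from dataclasses import dataclass

@dataclass
class Extent:
    start: int
    length: int
    zero: bool

def destination_is_zeroed(extents):
    """Return True if every destination extent is already zero.

    Only safe when the destination qcow2 has no backing file; the NBD
    protocol does not expose that detail, so the caller must know it
    from the management layer.
    """
    return all(ext.zero for ext in extents)

# A freshly created qcow2 typically reports one big zero extent.
extents = [Extent(0, 8 * 1024**4, zero=True)]
print(destination_is_zeroed(extents))  # True
```

With a check like this, the client could skip all zero requests up front and only copy the data extents.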
Uploading an 8T Fedora image containing 1.5G of data takes 5.5 minutes. It should be much faster.
Client log
Server log
Connection stats
zero takes most of the time - 16k requests, 261 seconds. write looks very slow - 11.91 MiB/s.
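The write rate alone explains a good chunk of the wall-clock time. A quick sanity check using only the numbers above (1.5 GiB of data, 11.91 MiB/s):

```python
# Back-of-the-envelope: how long does writing the real data take at the
# observed rate? Uses only the figures reported in the stats above.
data_mib = 1.5 * 1024          # 1.5 GiB of actual data, in MiB
rate = 11.91                   # observed write throughput, MiB/s

write_seconds = data_mib / rate
print(round(write_seconds))    # ~129 seconds just for writes
```

So out of the 5.5 minute (330 second) upload, roughly 129 seconds go to writes at this rate, on top of the 261 seconds spent in zero (the two overlap across workers, so they do not simply add up).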
Fix