Closed by Marshalzxy 2 years ago
Overlaybd does switch twice between user and kernel space for each I/O request, which causes a slight performance loss. However, I think the critical factor for I/O performance is the filesystem implementation (overlayfs on ext4 vs. ext4 on overlaybd). Overlaybd has some obvious advantages:
- The read performance of overlayfs degrades as the number of layers increases, but overlaybd's doesn't.
- Overlaybd's native writable layer implements sector-level data modification without any copy-on-write. In contrast, overlayfs has terrible write performance because of its 'copy-up' operation.
For more I/O benchmarks comparing overlaybd with overlayfs/devmapper, please see our paper.
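To make the copy-up point concrete, here is a toy model (not actual overlaybd or overlayfs code; the functions and numbers are illustrative) of how many bytes each approach writes when a container modifies a small region of a large lower-layer file:

```python
# Toy cost model for modifying part of a read-only lower-layer file.
SECTOR = 512  # bytes; typical block-device sector size

def overlayfs_copy_up_bytes(file_size, modified_bytes):
    # overlayfs must first copy the entire lower-layer file into the
    # upper layer ("copy-up") before the modification can be applied.
    return file_size + modified_bytes

def overlaybd_write_bytes(modified_bytes):
    # A block device rewrites only the sectors that actually changed.
    sectors = -(-modified_bytes // SECTOR)  # ceiling division
    return sectors * SECTOR

one_gib = 1 << 30
print(overlayfs_copy_up_bytes(one_gib, 100))  # ~1 GiB written for a 100-byte edit
print(overlaybd_write_bytes(100))             # 512 bytes
```

A 100-byte edit to a 1 GiB file costs roughly 1 GiB of writes under copy-up, versus a single sector under sector-level modification.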
@Marshalzxy let me add: overlaybd loads all of its indexes at startup. Overlayfs, on the other hand, has no index when it starts, so files that are not yet indexed/cached have to be searched layer by layer.
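A minimal sketch of that difference (illustrative Python, not real snapshotter code; layer contents are made up): overlayfs searches layers top-down until it finds the file, while overlaybd can answer from an index merged at startup.

```python
# Toy model: locating a file across image layers.
layers = [  # topmost layer first, as overlayfs stacks them
    {"app/config.yaml"},
    {"usr/bin/python"},
    {"etc/passwd", "usr/lib/libc.so"},
]

def overlayfs_lookup(path):
    """Search layer by layer; cost grows with the file's depth."""
    for depth, layer in enumerate(layers):
        if path in layer:
            return depth
    raise FileNotFoundError(path)

# overlaybd loads all indexes at startup, so every lookup is a single
# probe into a merged index, independent of the number of layers.
# (Iterate bottom-up so entries in upper layers overwrite lower ones.)
merged_index = {p: d for d, layer in reversed(list(enumerate(layers))) for p in layer}

def overlaybd_lookup(path):
    return merged_index[path]
```

Both lookups return the same layer, but the first one touches up to N layers per uncached file, while the second is a constant-time probe.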
Once overlayfs has located the file in a layer, will subsequent I/O reads be faster than overlaybd's? Or does overlaybd load its index into the kernel, so that I/O reads go directly to the backend filesystem?
Once overlayfs has opened the file, subsequent I/O reads might be faster than overlaybd's. We haven't done such micro-benchmarks. But overlaybd also has an all-in-kernel implementation that performs much better than overlayfs, even for already-opened files. We will propose that module for upstreaming later.
In my opinion, overlaybd is another lazy-pulling container image snapshotter for containerd. It is based on a block device and an iSCSI target driver. It redirects I/O from a kernel virtual block device to the user-mode overlaybd backend, which finally resends it to the kernel's local filesystem. I think overlaybd has a longer I/O path than overlayfs, because it switches twice between user mode and kernel mode when a container reads an image file (not in cache), while overlayfs switches only once. Theoretically, if the container image is already downloaded, file-read I/O would be slower with overlaybd than with overlayfs.
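The "switches twice" argument above can be sketched as a count of user/kernel boundary crossings along each read path (the path stages below are illustrative summaries of the description in this thread, not actual kernel code paths):

```python
# Toy model of an uncached read's journey through each stack.
OVERLAYFS_PATH = [
    ("user",   "container process issues read()"),
    ("kernel", "overlayfs resolves the file and reads from ext4"),
]

OVERLAYBD_PATH = [
    ("user",   "container process issues read()"),
    ("kernel", "virtual block device receives the request"),
    ("user",   "overlaybd backend translates the block range"),
    ("kernel", "local filesystem serves the backing data"),
]

def mode_switches(path):
    """Count transitions between user and kernel mode along the path."""
    return sum(a != b for (a, _), (b, _) in zip(path, path[1:]))

print(mode_switches(OVERLAYFS_PATH))  # 1
print(mode_switches(OVERLAYBD_PATH))  # 3
```

In this model overlaybd incurs two extra crossings per uncached read relative to overlayfs, which is the "switches twice" overhead discussed above; whether that dominates in practice depends on caching and the filesystem-level effects described earlier in the thread.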