tmakatos closed this issue 7 years ago
Can you try the newer dmdedup for 4.18 kernel: it has some fixes and enhancements (and it’s going to be the main branch we’re planning to support).
Thanks, Erez.
For deduplication, there are extra steps in the write path compared to normal writes: 1) calculate the checksum of the block, 2) check whether that checksum already exists in the metadata, and 3) decide whether to share an existing block or write a new one. Since your workload has only 30% duplicates, we end up writing to both the metadata and the data device for the remaining 70% of writes.
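The three steps can be illustrated with a toy bash sketch (this is not dm-dedup code; an in-memory associative array stands in for the on-disk hash index, and "PBN" means physical block number):

```shell
#!/usr/bin/env bash
# Toy model of the dedup write path. Requires bash 4+ for associative arrays.
declare -A hash_index    # stand-in for the metadata device's hash index

write_block() {
  local data="$1" csum pbn
  csum=$(printf '%s' "$data" | sha256sum | awk '{print $1}')  # step 1: checksum
  if [[ -n "${hash_index[$csum]}" ]]; then                    # step 2: lookup
    echo "hit PBN ${hash_index[$csum]}"                       # step 3a: share block
  else
    pbn=${#hash_index[@]}                                     # allocate a new PBN
    hash_index[$csum]=$pbn
    echo "miss PBN $pbn"                                      # step 3b: write data + metadata
  fi
}

write_block "AAAA"   # miss PBN 0 (new data: data write plus metadata update)
write_block "BBBB"   # miss PBN 1
write_block "AAAA"   # hit PBN 0 (duplicate: metadata-only update)
```

With only 30% duplicates, the "miss" branch, which touches both devices, dominates.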
Given that you are using the same physical device to store both metadata and data, the high latency is expected.
We suggest using a separate SSD for the metadata device, to verify whether that significantly affects performance.
It would also be helpful if you could provide stats for data with different duplicate percentages (for example 90%, 70%, 50%, and 0%).
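One way to generate such workloads is fio's `dedupe_percentage` option; a sketch of a sweep over the requested ratios (the device path and all job parameters below are illustrative, not taken from this thread):

```shell
# Hypothetical sweep: random-write IOPS at several duplicate ratios.
# /dev/mapper/mydedup and the job parameters are examples only.
for dup in 90 70 50 0; do
  fio --name="randwrite-dup${dup}" --filename=/dev/mapper/mydedup \
      --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --iodepth=32 --runtime=60 --time_based \
      --dedupe_percentage="${dup}" \
      --output="dup${dup}.json" --output-format=json
done
```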
Also, please use the newer kernel version. You can find the source here: https://github.com/dmdedup/dmdedup4.8
Thanks.
Tested on dmdedup4.8 as suggested; the issue still persists. Please see https://github.com/dmdedup/dmdedup4.8/issues/3
I am evaluating dm-dedup on an NVMe device (on top of LVM) on kernel 3.18.25-18.el6.x86_64 (I had to fix a compilation error regarding submitting bios). Both metadata and data devices are logical volumes on the same NVMe device. I create the target as follows:
Where `${TARGET_SIZE}` is 150% of the size of `${DATA_DEV}`. I then populate the first 4 GB of the `mydedup` target as follows:

And then do a short random write test as follows:
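The commands themselves did not survive in this copy of the issue. A reconstruction of what such a setup typically looks like, under the assumption that the dm-dedup table format is `<start> <sectors> dedup <meta_dev> <data_dev> <block_size> <hash_algo> <backend> <flushrq>` as described in the dmdedup documentation (all sizes, variables, and options here are illustrative, not the reporter's exact invocation):

```shell
# Illustrative only: not the reporter's exact commands.
# ${TARGET_SIZE} (in 512-byte sectors), ${META_DEV}, and ${DATA_DEV} are
# assumed to be set; their actual values are not given in this thread.
dmsetup create mydedup --table \
  "0 ${TARGET_SIZE} dedup ${META_DEV} ${DATA_DEV} 4096 md5 cowbtree 100"

# Populate the first 4 GB sequentially...
fio --name=prefill --filename=/dev/mapper/mydedup --ioengine=libaio \
    --direct=1 --rw=write --bs=4k --size=4G

# ...then run a short 4 KiB random-write test over the same range.
fio --name=randwrite --filename=/dev/mapper/mydedup --ioengine=libaio \
    --direct=1 --rw=randwrite --bs=4k --size=4G --runtime=60 --time_based
```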
I get 2.8K IOPS, while writing directly to `${DATA_DEV}` achieves more than 42K IOPS. In the dm-dedup case the CPU is only slightly used (15%) and the NVMe device is about 90% utilised.

Output of `dmsetup status mydedup` after the random write test has finished:

Is this performance expected?