allenjhan closed this issue 6 years ago
I have read some discussions online about O_DIRECT, and I am not sure what is happening. After more measurements, I am seeing that dm-writeboost has no effect on the inherent speed of the backing device when O_DIRECT is used. After running for a few minutes at the speed of the backing device, the cache slows down, just as it would if the writeback daemon turned on when urge_writeback becomes true. The cache statistics are updated just as they would be if the cache were working normally.
Discussion online seems to indicate that O_DIRECT bypasses the kernel and causes I/O to happen via DMA. If true, that would explain why dm-writeboost shows no change with increasing cache size: the cache is not being used. However, this is contradicted by the fact that the cache statistics are updated normally. I am still not sure how to interpret my results, so my plan is to stop using the O_DIRECT flag for my measurements. I will share the results if they make sense.
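One way to reconcile this: as far as I understand, O_DIRECT bypasses the kernel page cache, not the block layer, so requests still pass through device-mapper targets like dm-writeboost, which would explain why the cache statistics keep updating. A minimal sketch of a direct write, with an assumed device path and block size (not my actual Postmark change), looks like this:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* /dev/mapper/wbdev is a made-up name for a dm-writeboost device. */
    int fd = open("/dev/mapper/wbdev", O_WRONLY | O_DIRECT | O_SYNC);
    if (fd < 0)
        return 1;

    /* O_DIRECT skips the page cache, but this request is still submitted
     * through the block layer, so the device-mapper target sees it.
     * O_DIRECT also requires block-aligned buffers, offsets and lengths. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0)
        return 1;
    memset(buf, 0xab, 4096);

    if (pwrite(fd, buf, 4096, 0) != 4096)
        return 1;

    free(buf);
    close(fd);
    return 0;
}
```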
@allenjhan Thank you for evaluating Writeboost. But this is not a school for you. I will close this issue.
One comment: your understanding of O_DIRECT, and of how Writeboost behaves when it is used, is wrong.
@allenjhan - reviving this old thread for a minute... I am seeing a similar issue and wondering what conclusion you reached?
Hi,
I am trying to evaluate the performance of dm-writeboost for a school project. Basically, I want to show that dm-writeboost performs better (higher throughput) as the size of the cache increases. To vary the cache size, I am manually setting the wb->nr_segments variable.
I have noticed that the cache slows down when urge_writeback becomes true. (That is what I think is happening.) I have nr_batch set to 4, which may be why it is slow. When urge_writeback becomes true, dm-writeboost becomes slower than the backing device. Therefore, I have tried to evaluate dm-writeboost so that the entire workload fits in the cache device. That way, the cache does not need to wait for the writeback daemon to finish writing back in the middle of the workload.
My test setup is as follows. I use Postmark, modified so that the open() system call uses the O_DIRECT and O_SYNC flags, and I set Postmark's buffering option to false. I also deactivated the write cache of my hard drive with hdparm -W 0. This guarantees that the only cache being evaluated is dm-writeboost; there should be no interference from any other cache. I first let the system run a workload for about five minutes, which fills the cache device and gives the best chance of hits in the next run. I then make the cache write back all dirty caches to the backing device with drop_caches, so that writeback will not interrupt the next workload. Finally, I run a smaller workload that fits completely in the cache device and observe the speed and hit rate.
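For reference, a stripped-down sketch of the kind of timed direct write I am measuring looks roughly like this; the file path, block size, and write count are only placeholders, not my actual Postmark configuration:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    enum { BLK = 4096, COUNT = 25600 };  /* 25600 * 4 KiB = 100 MiB total */

    /* Hypothetical test file on the dm-writeboost-backed filesystem. */
    int fd = open("/mnt/wb/testfile",
                  O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires block-aligned buffers, offsets and lengths. */
    void *buf;
    if (posix_memalign(&buf, BLK, BLK) != 0) return 1;
    memset(buf, 0xab, BLK);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < COUNT; i++) {
        if (write(fd, buf, BLK) != (ssize_t)BLK) { perror("write"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Report throughput; with O_SYNC each write is durable before returning. */
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s\n", (double)BLK * COUNT / (1024.0 * 1024.0) / secs);

    free(buf);
    close(fd);
    return 0;
}
```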
I have two questions: 1) I have a strange result I cannot explain. dm-writeboost speeds up the backing device by about 15%. However, it reaches that 15% speedup at a cache size that gives a 20% hit rate, and from a 20% hit rate up to a 100% hit rate there is no further improvement. This is strange, since at a high hit rate the backing device is not accessed, so dm-writeboost should run at the speed of the cache device; there should be a tremendous speed improvement when the backing device is not accessed. Yet even when I observe a 100% hit rate, I only see the 15% speed improvement.
2) I have used O_DIRECT | O_SYNC to turn off kernel caching and hdparm to turn off the hard drive's write cache. However, the speed of the hard drive (without dm-writeboost) is still around 30 MB/s. If I include the -o sync option during mount, the speed of the HDD drops to 10 KB/s. That is a huge difference. Is there another cache in the system I am not aware of? Besides the hard drive's write cache and kernel caching, is there another source of caching?
I understand if you are busy and do not have time to consider my questions for very long. If you have read this far, I am already grateful.