klsx0 closed this issue 11 months ago.
I believe that `dd` is single-threaded, so depending on how fast your storage is you might be CPU-bound. Unlikely given the speeds you're seeing, but just something I wanted to mention.
What is the speed of the storage without DRBD? What is the speed of the network between the hosts?
My best guess here is that you're simply filling up the send/receive buffers on the network. While protocol A is asynchronous, once the TCP buffers fill up, things run at speeds similar to synchronous replication: we need to wait for the peer to ack a packet before we can clear it from our send buffer and queue another write. Because of this, even with asynchronous replication, we don't expect performance much beyond what the network is capable of when replicating a large volume of writes. We created DRBD-Proxy to work around this by providing gigantic buffers/cache, but even then, once those buffers fill, things drop back to network speeds.
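If you want to experiment at the DRBD level first, the send and receive buffers are set with `sndbuf-size` and `rcvbuf-size` in the resource's `net` section. A rough sketch of that fragment; the 10M values are only placeholders for experimentation, not tuned recommendations:

```
# net section fragment -- buffer sizes here are placeholders, not recommendations
net {
    protocol A;          # asynchronous replication
    sndbuf-size 10M;     # DRBD send buffer; the default of 0 means auto-tune
    rcvbuf-size 10M;     # receive buffer on the peer side
}
```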
Regarding protocol B, it was developed more for academic purposes than anything. It doesn't have a real-world use case, and we don't suggest its use.
Hello,
The speed of my storage without DRBD is about 220MB/s. My network is approximately 25MB/s, and with DRBD on it's about 23MB/s, so we are nearing the network limit.
Regarding DRBD-Proxy, all of the links on this page redirect to the linstor-server repo. Do we need a special license for this product, or is it now a linstor-server component?
I will try increasing the TCP buffers with protocol A as you suggested.
Thank you for your response,
The "star us on GitHub" link you followed is on many pages of the sites and is just a generic request for people to star the project on GitHub for some internet points. The only other really pertinent link there is the Disaster-Recovery link as that's the common use-case for DRBD-Proxy.
You will not find DRBD-Proxy on GitHub as it is a licensed product. You must contact sales regarding an eval license to try it out.
Please try experimenting with increasing the TCP buffers, but just know that it will only protect against short bursts of writes. With disks capable of 220MiB/s and a network of 25MiB/s, a stream of constant writes will eventually fill the buffer no matter its size. Perhaps your application's write workloads are only "bursty", though.
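For the kernel side, these are the usual sysctls that cap how large a socket buffer may grow; the sizes below are only illustrative, not recommendations:

```
# Illustrative values only -- raise the kernel's socket buffer ceilings
sysctl -w net.core.wmem_max=16777216                  # max send buffer a socket may use (16 MiB)
sysctl -w net.core.rmem_max=16777216                  # max receive buffer
sysctl -w net.ipv4.tcp_wmem="4096 1048576 16777216"   # min / default / max TCP send buffer
sysctl -w net.ipv4.tcp_rmem="4096 1048576 16777216"   # min / default / max TCP receive buffer
```

Even at 16MiB, though, a sustained write stream that outruns the network by roughly 195MiB/s fills that buffer in well under a second, which is the point above about constant writes.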
I tried DRBD with larger TCP buffers and protocol A; I got 32MB/s, but no better. In my case I have a lot of IO due to a database, and this really slows down the system.
Thank you, I will contact the sales team for a trial version of DRBD-Proxy.
Hello,
I use DRBD 9.2.6 with drbdadm 9.26.0 to replicate one volume to a remote host. In my case I am limited by a network with a bandwidth of 220Mb/s (roughly 27MB/s).
Using DRBD with protocol C is too restrictive for me, as it reduces the speed of the replicated disk and of the PostgreSQL database running on it. So I wanted to test performance by switching to protocol B or A.
The results are exactly the same between the three protocols. To test disk performance, I used the commands from the documentation.
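(Roughly, the test was a sequential write straight to the DRBD device with the page cache bypassed; the device path below is an example:)

```
# Sequential write test against the DRBD device (device path is an example)
dd if=/dev/zero of=/dev/drbd1 bs=1M count=1024 oflag=direct
```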
/etc/drbd.d/r1.res
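(The file is shaped roughly like the sketch below; the second hostname, the backing disks, and the addresses are placeholders rather than my real values:)

```
# Sketch of the resource layout -- second host, disks and addresses are placeholders
resource r1 {
  net {
    protocol A;                  # switched between A, B and C for the tests
  }
  on dev-machine-1 {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   192.0.2.1:7789;
    meta-disk internal;
  }
  on dev-machine-2 {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   192.0.2.2:7789;
    meta-disk internal;
  }
}
```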
Of course, I tried to change the placement of the `protocol` option, but with no noticeable effect.
Of course, the /etc/drbd.d/global_common.conf file is left at its defaults.
I also tried changing the version: 9.2.6 -> 9.2.5 -> 9.1.17, but without success. Here is my system information (Debian 11):
Linux dev-machine-1 5.10.0-26-amd64 #1 SMP Debian 5.10.197-1 (2023-09-29) x86_64 GNU/Linux
All DRBD versions were compiled from the tar.gz sources. Thank you very much for your time. Sincerely, Kylian