Closed jrtcppv closed 1 year ago
Unfortunately this breaks things even if I make modest changes to the block size
Do you have any insight into exactly what's breaking or how?
I haven't looked closely enough to be absolutely certain it's related, but I do see a very similar-looking `nbdBufferSize` in bmcweb that might need a corresponding tweak (it looks like you'd need to set it to whatever nbd-proxy's buffer size is, plus 16): https://github.com/openbmc/bmcweb/blob/818db200c0e651896d5dddd081d56b180a4b9314/include/nbd_proxy.hpp#L37
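For what it's worth, here is a guess at where that "plus 16" could come from: the NBD simple reply header is 16 bytes (4-byte magic, 4-byte error, 8-byte handle), so a buffer sized for N payload bytes would need N + 16 to also hold a header. The constant names below are illustrative, not taken from either codebase:

```javascript
// Illustrative only: the NBD "simple reply" header is
// 4 (magic) + 4 (error) + 8 (handle) = 16 bytes, which may be why
// bmcweb sizes its buffer as the proxy buffer size plus 16.
const NBD_SIMPLE_REPLY_HEADER = 4 + 4 + 8; // bytes
const proxyBufferSize = 0x20000;           // 128 KiB, matching nbd-proxy
const bmcwebBufferSize = proxyBufferSize + NBD_SIMPLE_REPLY_HEADER;
console.log(bmcwebBufferSize); // 131088
```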
As for whether it will improve performance significantly, it might depend on the round-trip network latency you're working with -- on a higher-latency link I'd expect a larger relative improvement from increasing the buffer size, but on a low-latency local network I'd guess the benefit will probably be smaller. It might also depend on the block-level I/O requests issued by the host OS, though I'm not familiar offhand with exactly how getting filtered through nbd/nbd-proxy affects that.
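A rough back-of-the-envelope illustrates the latency point: if (worst case) each buffer has to complete a round trip before the next one is sent, throughput is bounded by buffer size divided by RTT. This is a simplification that ignores pipelining and TCP behavior, and the numbers are illustrative, not measured:

```javascript
// Worst-case "stop-and-wait" bound: throughput <= bufferSize / RTT.
// A larger buffer raises the ceiling much more on a high-latency link.
function maxThroughputMiBps(bufferBytes, rttMs) {
  const tripsPerSecond = 1000 / rttMs;
  return (bufferBytes * tripsPerSecond) / (1024 * 1024);
}

console.log(maxThroughputMiBps(128 * 1024, 5));    // 128 KiB buffer, 5 ms LAN
console.log(maxThroughputMiBps(128 * 1024, 100));  // same buffer, 100 ms WAN
console.log(maxThroughputMiBps(1024 * 1024, 100)); // 1 MiB buffer, 100 ms WAN
```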
Hi, thank you for the quick response. The errors vary depending on what I set the block size to, but they are all I/O errors when I attempt to read from the mounted block device. Here is a sample of the errors when running `cp` on some files with a buffer size of 0x40000:
```
cp: error reading '/media/b3/610080-camera_2-20220829-184824Z.mp4': Input/output error
cp: error reading '/media/b3/610080-camera_2-20220829-184824Z.srt': Input/output error
cp: error reading '/media/b3/desktop.ini': Input/output error
Sat, 23 Sep 2023 22:16:44 +0000 | ACCESS | session | url:'http://jon.tator.dev/bespoke/ws/3254/' id:'1695507341838702109' remote:'10.1.29.161' command:'/usr/local/sbin/nbd-proxy' origin:'https://jon.tator.dev' pid:'80' | DISCONNECT
cp: error reading '/media/b3/locations.log': Input/output error
cp: cannot access '/media/b3/lost+found': Input/output error
```
I am actually unfamiliar with the bmcweb repository you linked; I am only using the contents of this repository. I am running `nbd-proxy` with `websocketd` in a cloud environment and running `nbd.js` in a browser locally, so each chunk travels over the internet and the round-trip latency is significant. We see significant improvements in upload speeds when using larger body sizes in HTTP requests, so I thought that might help here too, but I just can't seem to get it to work. I have been digging around the nbd repo and see quite a few constants, but the only ones that line up with 0x20000 (here and here) are not in the nbd-client code as far as I can tell.
The NBD proxy treats the nbd socket as a stream and doesn't try to match any nbd protocol structure to the WebSocket messages. The kernel may request an almost unlimited amount of read-ahead, so the data doesn't work out to neat message sizes.
That said, the buffering was changed in bmcweb and may result in copying partial content to the beginning of the buffer as it is processed. I haven't looked at how the TLS max record size might factor into the communication or into additional copies. To my understanding, there should be nothing preventing the n+1 packet from being encoded while the prior packet is in flight, but once a packet is encoded it has to flow before another request.
You might want to try limiting read-ahead instead of trying to increase buffer sizes.
Thank you for the advice, I tried the following:

- Setting the read-ahead with `blockdev --setra 16384 /dev/nbd0`.
- Using `dd` rather than `cp` so I could control the block size of the copy on the block device, with `dd if=/dev/nbd0 of=/tmp/test.img bs=4M`.
- Increasing the buffer size in `nbd-proxy.c` to 0x80000. I believe the errors I saw before were due to a networking issue, so it does work with a larger buffer size.

Unfortunately none of this had any effect, and the JS server still seems to be sending 128KiB chunks.
I was able to implement a buffering scheme around the WebSocket, so I now wait until I have 1MB of data before sending it off with `ws.send`. This seems to have increased our upload speeds significantly. If there is any interest I'd be happy to open a PR; even if you decide not to merge it, someone with a similar use case could take a look.
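For anyone curious before a PR exists, a minimal sketch of that kind of scheme might look like the following. The class and method names here are hypothetical, not the actual `nbd.js` change: outgoing chunks are queued and `ws.send` is only invoked once a byte threshold is reached or an explicit flush is requested.

```javascript
// Hypothetical send-side buffering wrapper (not the actual nbd.js code):
// accumulate outgoing chunks and only hand them to the underlying
// send function once `threshold` bytes are queued, or on flush().
class BufferedSender {
  constructor(sendFn, threshold = 1024 * 1024) {
    this.sendFn = sendFn;        // e.g. (buf) => ws.send(buf)
    this.threshold = threshold;  // flush once this many bytes are queued
    this.chunks = [];
    this.queued = 0;
  }

  write(chunk) {                 // chunk: Uint8Array
    this.chunks.push(chunk);
    this.queued += chunk.length;
    if (this.queued >= this.threshold) this.flush();
  }

  flush() {                      // coalesce queued chunks into one send
    if (this.queued === 0) return;
    const out = new Uint8Array(this.queued);
    let offset = 0;
    for (const c of this.chunks) {
      out.set(c, offset);
      offset += c.length;
    }
    this.chunks = [];
    this.queued = 0;
    this.sendFn(out);
  }
}
```

One caveat with any scheme like this: a final `flush()` is needed when the transfer ends, or a trailing partial buffer never gets sent.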
Either way thank you both for all the help!
I am trying to upload some large files using this library, and it works well, but the speed is fairly slow compared to our bandwidth. I noticed that each `ws.send` is only sending 128KiB, so I attempted to increase the buffer size in `nbd-proxy.c` at this line. Unfortunately this breaks things even if I make modest changes to the block size, so I'm wondering if I need to change something elsewhere? Do I need to compile a custom `nbd-client`? Do you think this will even improve speeds? Any help would be greatly appreciated. Thanks, this software has been really helpful for me!