the8472 / mldht

Bittorrent Mainline DHT implementation in java
Mozilla Public License 2.0

Speed up the write completion of the last "packet" in the Server class #18

Closed: ghost closed this issue 6 years ago

ghost commented 6 years ago

I have noticed that the last "packet" corresponds to the "exit" action of a CLI command, but I really can't understand why only the last "packet" takes so much more time to write than the others. I don't know whether this is really an issue or whether I missed something while debugging the project. I'm still stepping through the code looking for whatever is delaying the write of the last two ByteBuffers (in this case), but so far I have no idea what's happening under the hood. I know this reads like a question, but I think a GitHub issue is the best way to solve, or try to solve, it. Any help is welcome.

the8472 commented 6 years ago

Probably Nagle's algorithm at work. I'll replace the write calls with gathering IO ones.
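For context, a gathering write hands the header and payload to the kernel in one call, so the small trailing buffer isn't held back waiting for an ACK on the previous segment. A minimal sketch of the idea (method and names here are illustrative, not the actual Server code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class GatheringWriteSketch {
    // Two separate write() calls can leave a small trailing buffer
    // queued by Nagle's algorithm until the earlier segment is ACKed.
    // A gathering write submits both buffers in one call instead.
    // Assumes a blocking channel; a non-blocking one would need a selector.
    static void writeFramed(SocketChannel channel, ByteBuffer header, ByteBuffer payload)
            throws IOException {
        ByteBuffer[] buffers = { header, payload };
        while (header.hasRemaining() || payload.hasRemaining()) {
            channel.write(buffers); // GatheringByteChannel.write(ByteBuffer[])
        }
    }
}
```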

the8472 commented 6 years ago

Does f8818109bc35a87b743f72a75080c162f44a59b8 fix it for you?

ghost commented 6 years ago

Unfortunately no, the excessive delay persists. It's curious that the delay is restricted to the last two ByteBuffer instances (header and payload), which make up the command finalization "packet". Below are some measurement points I inserted (client side), followed by their partial times for the last two "packets".

```
[NEW "PACKET"]
header read: 0ms
Header: java.nio.HeapByteBuffer[pos=4 lim=4 cap=4]
Message read: 0ms
Message: java.nio.HeapByteBuffer[pos=67 lim=67 cap=67]
Map decode: 0ms

[NEW "PACKET"]
header read: 20334ms
Header: java.nio.HeapByteBuffer[pos=4 lim=4 cap=4]
Message read: 0ms
Message: java.nio.HeapByteBuffer[pos=29 lim=29 cap=29]
Map decode: 1ms
```

the8472 commented 6 years ago

20s? That does not look like a network issue. Have you measured on the sender side? And which command are you measuring specifically? I suspect that doesn't have anything to do with the client-server model and that it's just timeouts that the tasks are waiting for. You should take a look at the logs too, maybe bump the log level to debug.
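For reference, measuring on the sender side can be as simple as wrapping the write call with System.nanoTime (a hypothetical helper for illustration, not code from this project):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.GatheringByteChannel;

class WriteTimer {
    // Wraps a gathering write and prints the elapsed time, in the same
    // shape as the measurements quoted below (hypothetical helper).
    static long timedWrite(GatheringByteChannel channel, ByteBuffer[] buffers)
            throws IOException {
        long start = System.nanoTime();
        long written = channel.write(buffers);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(written + " bytes written in " + elapsedMs + " ms");
        return written;
    }
}
```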

ghost commented 6 years ago

I made time measurements on the sender side. Below are some measurement points I inserted (this time on the server side, in the Server class):

```
(...)
3592 bytes written in 1 ms
33 bytes written in 0 ms  <-- last
```

I'm taking the time measurements while executing the "GETPEERS" command without the "-fast" parameter (could this be the problem?). I took a look at the isDone method of the PeerLookupTask class, and I think that is exactly what I missed during debugging.

the8472 commented 6 years ago

> I'm taking the time measurements while executing the "GETPEERS" command without the "-fast" parameter (could this be the problem?)

That's working as intended, then. Non-fast mode waits out the full hard timeout for every sent request, which is 10 seconds; with -fast it terminates based on adaptive soft timeouts instead.

Slow mode tries hard to get all the data it can find; fast mode is less accurate but, as the name says, also faster.
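Roughly, the difference between the two modes looks like this (a simplified sketch of the idea only, not mldht's actual timeout code):

```java
import java.util.concurrent.TimeUnit;

class TimeoutSketch {
    // Hard timeout: every in-flight request is waited out for the full
    // 10 s before the task can complete (non-fast behavior).
    static final long HARD_TIMEOUT_MS = TimeUnit.SECONDS.toMillis(10);

    // Soft timeout: adapt a cutoff to observed round-trip times so
    // stragglers are written off early (-fast behavior, roughly).
    private double smoothedRttMs = 500; // conservative initial guess

    void onResponse(long rttMs) {
        // exponentially weighted moving average of observed RTTs
        smoothedRttMs = 0.875 * smoothedRttMs + 0.125 * rttMs;
    }

    long softTimeoutMs() {
        // a few smoothed RTTs of headroom, with a sane floor
        return Math.max(100, (long) (smoothedRttMs * 3));
    }
}
```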

If you want more fine-grained control, you should use it directly as a library instead of talking over the remote-CLI interface.

ghost commented 6 years ago

> If you want more fine-grained control, you should use it directly as a library instead of talking over the remote-CLI interface.

Ok, I will do that as soon as possible. Also, congratulations on your work, and thank you for the help. See ya!