rajivnalawade / openpgm

Automatically exported from code.google.com/p/openpgm

Issue with rate-limiting engine #28

Status: Open. GoogleCodeExporter opened this issue 8 years ago.

GoogleCodeExporter commented 8 years ago
Have a setup where I'm using OpenPGM on the server side and MS-PGM on the 
client side, trying to multicast a large file.  What I'm running into is an odd 
performance problem when trying to increase the speed.

For example (on the sender):
PGM_MTU: 7500
PGM_TXW_MAX_RTE: 7,000,000

This operates as I would expect, with a speed of roughly 7000 KB/sec.
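
For context, here is roughly how those two knobs are set on the sender; a 
minimal sketch only, assuming the OpenPGM 5.x pgm_setsockopt API, not our 
exact code:

    /* Sketch: configuring the sender options mentioned above.
     * Assumes sock was created beforehand with pgm_socket(). */
    const int mtu     = 7500;       /* PGM_MTU: max TPDU size, bytes     */
    const int max_rte = 7000000;    /* PGM_TXW_MAX_RTE: bytes per second */
    pgm_setsockopt (sock, IPPROTO_PGM, PGM_MTU, &mtu, sizeof (mtu));
    pgm_setsockopt (sock, IPPROTO_PGM, PGM_TXW_MAX_RTE, &max_rte, sizeof (max_rte));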

However, when I change PGM_TXW_MAX_RTE to 8,000,000, the rate drops through the 
floor.  It's not the repair cycle or anything; the rate of packets coming from 
the sender itself drops.  If I increase PGM_MTU to 9000, performance picks 
back up.

Looking at the code, I think the problem is in the calculations done during 
setup in the rate engine.  Specifically, looking at pgm_rate_create in 
rate_control.c, I see:

    /* If at least one max-size TPDU fits in each millisecond's budget,
     * switch the token bucket to millisecond granularity. */
    if ((rate_per_sec / 1000) >= max_tpdu) {
        bucket->rate_per_msec = bucket->rate_per_sec / 1000;
        bucket->rate_limit    = bucket->rate_per_msec;
    } else {
        /* Otherwise account at one-second granularity. */
        bucket->rate_limit    = bucket->rate_per_sec;
    }

My first impression is that bucket->rate_limit is being set wrong in the first 
branch; shouldn't that be bucket->rate_limit = bucket->rate_per_sec?
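
For reference, plugging my numbers into that condition (my own arithmetic):

    PGM_TXW_MAX_RTE = 7,000,000:  7,000,000 / 1000 = 7000 <  7500 (PGM_MTU)
                                  -> else branch, rate_limit = 7,000,000 (per-second bucket)
    PGM_TXW_MAX_RTE = 8,000,000:  8,000,000 / 1000 = 8000 >= 7500 (PGM_MTU)
                                  -> if branch, rate_limit = 8000 (per-millisecond bucket)
    raising PGM_MTU to 9000:      8000 <  9000
                                  -> back to the else branch, matching the recovery I see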

The basic workflow for the sending loop (non-blocking) is to send, check the 
status code, and, in the case of PGM_IO_STATUS_RATE_LIMITED, use pgm_getsockopt 
to fetch PGM_RATE_REMAIN.  Once PGM_TXW_MAX_RTE/1000 is >= PGM_MTU, the wait 
values returned skyrocket, causing the sender to slow down greatly.
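
For concreteness, a minimal sketch of that loop, assuming the standard OpenPGM 
5.x calls (pgm_send returning PGM_IO_STATUS_* codes, and PGM_RATE_REMAIN 
fetched via pgm_getsockopt as a struct timeval); this is just to illustrate 
the flow, not our exact code:

    #include <stdbool.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <sys/time.h>
    #include <pgm/pgm.h>

    /* Send one APDU, waiting out rate-limit intervals as they are reported. */
    static bool
    send_one_apdu (pgm_sock_t* sock, const void* buf, size_t len)
    {
        size_t bytes_written;
        for (;;) {
            const int status = pgm_send (sock, buf, len, &bytes_written);
            if (PGM_IO_STATUS_NORMAL == status)
                return true;                        /* APDU accepted */
            if (PGM_IO_STATUS_RATE_LIMITED != status)
                return false;                       /* real error, give up */
            /* Rate limited: ask how long until the bucket refills. */
            struct timeval tv;
            socklen_t optlen = sizeof (tv);
            if (!pgm_getsockopt (sock, IPPROTO_PGM, PGM_RATE_REMAIN, &tv, &optlen))
                return false;
            select (0, NULL, NULL, NULL, &tv);      /* sleep, then retry */
        }
    }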

So I'm trying to figure out whether this is (a) a mistake on our part in the 
program flow, or (b) a bug in OpenPGM.

A few additional details:
- server is running on FreeBSD 7.0
- we're using libpgm-5.2.122
- client is running on Windows 7 using MS-PGM

Thanks,

Jon

Original issue reported on code.google.com by jonengl...@gmail.com on 22 May 2013 at 6:31

GoogleCodeExporter commented 8 years ago
Due to the large MTU size, you are being hit by the coarse conversion to 
millisecond granularity: once the rate is divided into per-millisecond 
buckets, each bucket holds barely more than one max-size TPDU.

Now the question arises of what the best way forward is.  It would appear 
possible to tweak the calculation to defer the switch to millisecond 
granularity until the rate reaches a larger multiple or ratio of the MTU size.
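
For example, a hypothetical sketch of such a tweak in pgm_rate_create (the 
factor of 4 is an arbitrary illustration, not a tested value):

    /* Hypothetical tweak: only switch to millisecond buckets once several
     * max-size TPDUs fit per millisecond; the factor is illustrative only. */
    #define RATE_MSEC_FACTOR    4

    if ((rate_per_sec / 1000) >= (RATE_MSEC_FACTOR * max_tpdu)) {
        bucket->rate_per_msec = bucket->rate_per_sec / 1000;
        bucket->rate_limit    = bucket->rate_per_msec;
    } else {
        bucket->rate_limit    = bucket->rate_per_sec;
    }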

Original comment by fnjo...@gmail.com on 22 May 2013 at 7:58