This is the sender's log file.
Original comment by stepinto...@gmail.com
on 26 Apr 2012 at 3:22
Attachments:
It is by design that repairs are deferred. There are two reasons: first, some configurations require that original data continue to flow in the presence of significant failures; second, with jumbograms repairs become very expensive and cause a notable impact on original data delivery.
At high message rates everything tends to fail; the conclusion is that the application has to provide some form of coarse-grained throttling so that the environment has the time and resources for repair and transmission.
Note that the current configuration of PGM with 0MQ provides low-latency delivery and TCP fairness; it is not hard-tuned for maximum throughput.
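As a rough illustration of what coarse throttling can look like with the stock 0MQ PGM transport, here is a minimal sketch against the libzmq C API (3.x-style calls; 2.x differs in the context calls and option units). The interface name, multicast group, and numbers are placeholders, and libzmq must be built with PGM support:

```c
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);

    int rate = 80000;     /* ZMQ_RATE: multicast data rate, in kilobits per second */
    zmq_setsockopt(pub, ZMQ_RATE, &rate, sizeof rate);

    int recovery = 10000; /* ZMQ_RECOVERY_IVL: how long data is retained for repairs
                             (milliseconds in 3.x; units differ in older releases) */
    zmq_setsockopt(pub, ZMQ_RECOVERY_IVL, &recovery, sizeof recovery);

    zmq_connect(pub, "epgm://eth0;239.192.1.1:5555");  /* placeholder interface/group */

    /* ... zmq_send() loop, paced by the application ... */

    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}
```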
I'm working with another multicast transport on Windows that yields significantly higher throughput by pushing very large multi-fragment packets at a very low packet rate. PGM manages ~14,000 packets per second at around ~80 Mb/s, whereas protocol X manages ~700 Mb/s with only ~1,000 packets per second.
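For scale, assuming those figures are payload rates in megabits: ~80 Mb/s at ~14,000 packets/s works out to roughly 700 bytes per packet, while ~700 Mb/s at ~1,000 packets/s is roughly 87,500 bytes per packet, i.e. each datagram spans many fragments.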
TODO: inspect log.
--
Steve-o
Original comment by fnjo...@gmail.com
on 26 Apr 2012 at 2:02
Hi Steve, thanks a lot for your reply!
Furtherly, to "provide some form of coarse grained throttling to permit the
environment resources for repair time and transmission", do you think it's a
good idea in the situation of lots of receivers runs on same machine that using
a daemon like tibrv-rvd, which receive packet by openPGM and feed subscribers
with IPC, because I think muliti-receiver will incur heavy IO/CPU overhead
which cause data loss, and by my test, unrecoverable data loss never happened
in single receiver mode.
Another question: per RFC 3208, receivers may optionally multicast a NAK with a TTL of 1 to the local group for missing data packets. If the sender is busy with ODATA/RDATA/NCF, perhaps receivers multicasting NAKs could share some of the burden of NAK suppression with the sender.
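To make the TTL point concrete, I just mean the standard socket option (my own illustration in C, not OpenPGM code; the function name is invented):

```c
#include <netinet/in.h>
#include <sys/socket.h>

/* Confine a multicast NAK to the local segment: TTL 1 means the packet is
 * never forwarded past the first router, as RFC 3208 intends. */
int confine_nak_to_segment(int udp_sock)
{
    unsigned char ttl = 1;
    return setsockopt(udp_sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl);
}
```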
The attachment is the log of receiver 1 (there are 10 receivers).
Original comment by stepinto...@gmail.com
on 27 Apr 2012 at 11:51
Attachments:
The TIBCO Rvd or ciServer approach is great for propagating client disconnects, and it can also be efficient for packet fan-out with an appropriately high-speed user-space IPC method. The trade-off is the cost of one application making the socket calls and absorbing the kernel-switch overhead, compared with every application being hit by kernel switching. For many scenarios it is quite surprising how small the difference is.
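For what it's worth, the fan-out half of that pattern is easy to sketch with 0MQ itself (my illustration, not rvd; endpoint strings are placeholders, and it assumes libzmq 3.x's zmq_proxy, which was zmq_device in 2.x): one process joins the multicast group over epgm and republishes locally over ipc, so only the daemon pays the PGM and kernel-switch cost.

```c
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();

    /* single PGM receiver: only this process joins the multicast group */
    void *frontend = zmq_socket(ctx, ZMQ_SUB);
    zmq_setsockopt(frontend, ZMQ_SUBSCRIBE, "", 0);
    zmq_connect(frontend, "epgm://eth0;239.192.1.1:5555");  /* placeholder group */

    /* local fan-out: subscribers on the same host connect here instead */
    void *backend = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(backend, "ipc:///tmp/pgm-fanout");             /* placeholder path */

    zmq_proxy(frontend, backend, NULL);  /* forward messages until terminated */

    zmq_close(frontend);
    zmq_close(backend);
    zmq_ctx_destroy(ctx);
    return 0;
}
```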
Note a limitation of the PGM protocol: whilst you can have multiple receivers on one host, you cannot, out of the box, have multiple senders working in the same configuration. This is when a broker application becomes necessary to manage the incoming NAK requests.
Note that multicast NAKs are for routed environments, to accelerate NAK suppression. Within a LAN segment the multicast of NCFs by the original sender performs the same role. This is another reason for the deferred RDATA transmission: when a NAK is received, an NCF is immediately returned and the RDATA is queued. This enables the network to perform NAK suppression and the receiver to perform NAK elimination, improving the value of the subsequent RDATA broadcast.
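A minimal sketch of that sequence, with invented names (this is not OpenPGM code, just the shape of the idea):

```c
/* Illustrative only: confirm a NAK at once with an NCF, but queue the RDATA
 * so the network can suppress duplicate NAKs before the repair is sent. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define QUEUE_MAX 64
static uint32_t rdata_queue[QUEUE_MAX];
static size_t rdata_len;

static bool already_queued(uint32_t sqn)
{
    for (size_t i = 0; i < rdata_len; i++)
        if (rdata_queue[i] == sqn)
            return true;
    return false;
}

static void on_nak(uint32_t sqn)
{
    printf("NCF   %u\n", sqn);             /* immediate multicast confirmation */
    if (!already_queued(sqn) && rdata_len < QUEUE_MAX)
        rdata_queue[rdata_len++] = sqn;    /* NAK elimination: one repair per sqn */
}

static void flush_rdata(void)              /* runs later, after the back-off */
{
    for (size_t i = 0; i < rdata_len; i++)
        printf("RDATA %u\n", rdata_queue[i]);
    rdata_len = 0;
}

int main(void)
{
    on_nak(42);
    on_nak(42);   /* duplicate NAK: only one RDATA ends up queued */
    flush_rdata();
    return 0;
}
```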
--
Steve-o
Original comment by fnjo...@gmail.com
on 27 Apr 2012 at 1:52
Original issue reported on code.google.com by
stepinto...@gmail.com
on 26 Apr 2012 at 3:20