Open dmart28 opened 8 years ago
fast-cast tries to optimize high-frequency traffic by packing several messages into a single packet (without adding another thread/queue). The intended behaviour is that offered messages may be held back briefly (on the order of a millisecond) so several of them can be coalesced into one packet before it is written to the socket.
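A minimal sketch of the coalescing idea described above. Note this is an illustration only: the class and method names (`Batcher`, `offer`, `flush`) are hypothetical and not the real fast-cast API; a real implementation would write the packet to a `DatagramChannel` instead of a list.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of message coalescing (NOT the fast-cast API):
// offered messages accumulate in one packet buffer and are only handed to
// the "socket" when the packet is full or flush() is called.
public class Batcher {
    private final ByteBuffer packet = ByteBuffer.allocate(1400); // ~one MTU payload
    final List<byte[]> sentPackets = new ArrayList<>();          // stands in for the socket

    public void offer(byte[] msg) {
        if (packet.remaining() < msg.length)
            flush();                 // packet full => send what we have so far
        packet.put(msg);             // otherwise just coalesce into the current packet
    }

    public void flush() {
        if (packet.position() == 0) return;
        byte[] out = new byte[packet.position()];
        packet.flip();
        packet.get(out);
        packet.clear();
        sentPackets.add(out);        // real code would write to a DatagramChannel here
    }

    public static void main(String[] args) {
        Batcher b = new Batcher();
        for (int i = 0; i < 10; i++)
            b.offer(new byte[100]);  // ten small messages
        b.flush();
        // all ten messages fit into a single 1400-byte packet => one send
        System.out.println(b.sentPackets.size() + " packet(s), "
            + b.sentPackets.get(0).length + " bytes");
    }
}
```

Running it shows ten 100-byte messages leaving as one 1000-byte packet, which is the whole point of the batching: one syscall and one network packet instead of ten.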
If you can provide a reproducing testcase (e.g. with messages sending a timestamp) I'll fix that.
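A sketch of the kind of reproducing testcase asked for above: the sender stamps each message with `System.nanoTime()` and the receiver computes the observed delay. Plain UDP on localhost stands in here for the fast-cast publisher/subscriber (an assumed substitution, since only the measurement pattern matters); to reproduce the actual issue, the same stamping would be wired into fast-cast's `offer`/receive callbacks.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Measures send-to-receive delay by embedding the send timestamp in the payload.
// Uses plain UDP as a stand-in transport; the pattern transfers to fast-cast.
public class TimestampProbe {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);
             DatagramSocket sender = new DatagramSocket()) {
            int port = receiver.getLocalPort();
            // sender side: put the current nanosecond timestamp into the payload
            byte[] payload = ByteBuffer.allocate(8).putLong(System.nanoTime()).array();
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), port));
            // receiver side: delay = arrival time minus the embedded timestamp
            DatagramPacket p = new DatagramPacket(new byte[8], 8);
            receiver.receive(p);
            long delayNs = System.nanoTime() - ByteBuffer.wrap(p.getData()).getLong();
            System.out.println("observed delay: " + (delayNs / 1000) + " us");
        }
    }
}
```

Sending many such messages and logging the per-message delay would make an unexpectedly large batching delay (e.g. well beyond a millisecond) directly visible.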
Forgot an edge case
Yes, that explains it. It seems I really hit an edge case. Thanks for the explanations, though you could start a wiki here on top of questions like these :)
it should only delay for a millisecond or so, do you observe larger delays ?
Haven't observed any yet. How is ppsWindow actually calculated now? I noticed it is marked as Deprecated.
oops, I forgot about that :). It has been a while since I was into fast-cast ..
Ahh, got it again .. I improved that:
btw: are you using fast-cast inside your "reveno" framework? Looks interesting .. I'll provide first-class support for fast-cast, as I don't know of a production-grade system using it yet (my major client seems to stick with fastcast 1 forever, no problem => no upgrade ;) ).
Be aware that multicast does not work on public cloud platforms, though.
Yes, I am using it there as one of the options for failover. I mostly liked fast-cast for things like zero-copy and its focus on latency. To my big surprise, I haven't found another reliable zero-copy, low-latency multicast framework (with really all those things in the box). I am also aware of the multicast limitations in some environments, etc., so I'm trying to provide multiple options in Reveno, which started recently more as an experiment and has had only a few releases, so I can't say it's very production-grade for now :) Thanks for the support offer!
There is also Aeron, which is probably more advanced than fast-cast; in turn, fast-cast is simpler. I compared fast-cast with (an early version of) Aeron and found Aeron slightly faster in the average case (~3-7%), while fast-cast had fewer outliers in the 99.x percentiles. The issue is that in practice most performance is lost in processing/decoding, so benchmarking pure byte-array transport is only a small part of the story. I also used JGroups, which has the advantage of being able to transparently switch to TCP; however, its reliable multicast implementation is abysmal. I wrote fast-cast because JGroups just was not good enough to support our project. It also does not scale well (>50-node clusters), and the JGroups TCP stack does not scale well because TCP does not scale :).
Hello,
I have a question: say I do some number of "publisher.offer(..)" calls and then call "publisher.flush()". What guarantee is there that everything offered before the "flush" call was really written down to the socket?
From my tests it seems that not everything gets written to the channel right after flush. Thanks in advance.