
Make statistics be sliding window based instead of being reset every X seconds #221

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
See http://groups.google.com/group/project-voldemort/browse_thread/thread/48ec66f33642552d/6772c51140b43a19 for a brief discussion on this.

For our usage, sliding-window statistics would make much more sense to administrators than how statistics are implemented currently. I'm willing to take the time to implement this if there is enough agreement that a sliding-window solution would be a better choice.

Original issue reported on code.google.com by bruce.ri...@gmail.com on 22 Feb 2010 at 7:40

GoogleCodeExporter commented 9 years ago
Hey Bruce, I don't think anyone likes the current behavior. The only concern would be the performance of the window approach. For example, if I have 10k requests/sec over a 30-second window, that is 300k measurements I need to sum over to compute the average time. Any solution that makes the stats more accurate and doesn't introduce a ton of overhead would be very welcome.

Original comment by jay.kr...@gmail.com on 23 Feb 2010 at 5:26

GoogleCodeExporter commented 9 years ago
That's my concern as well. Considering that our use case likely has the absolute highest ops/sec you'll see on a Voldemort server (using just the cache store), I'll get the overhead down as low as I can and do some load tests to see how it performs under stress.

Original comment by bruce.ri...@gmail.com on 23 Feb 2010 at 2:34

GoogleCodeExporter commented 9 years ago
Awesome. Yes, I mean monitoring is worth some cost, so it need not be free, but it should be possible for a monitoring tool (Ganglia, Zenoss, or whatever) to ping all the stores once a minute and roll up the numbers without DOSing the server.

Original comment by jay.kr...@gmail.com on 23 Feb 2010 at 7:59

GoogleCodeExporter commented 9 years ago
I've implemented this about 3-4 different ways so far, and the best I've managed is:

26.5k ops/sec with stats commented out on client and server
26.3k ops/sec with old stats on client and server
25.1k ops/sec with stats off on client + new stats on server
23.3k ops/sec with new stats on client and server

That means a 12% maximum overhead compared to the existing stats implementation under max load. My client is 20 threads on a quad-core server, and the machine is maxed out if not slightly overloaded. Under light to moderate load I suspect the overhead wouldn't be enough to measure.

From what I can tell, the full cost of that overhead is in ConcurrentLinkedQueue.offer(x). I've tried using other data structures and even doing the add operation via an executor, with the same or worse results.

Impl attached.

Original comment by bruce.ri...@gmail.com on 24 Feb 2010 at 5:17
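
(Editorial sketch, for readers following the numbers above: a minimal queue-based sliding window of the kind being measured here. This illustrates the technique only; the class and field names are invented and this is not the attached implementation.)

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of a queue-based sliding window: every request offers a
// (timestamp, value) pair, and readers evict entries older than the
// window before averaging. ConcurrentLinkedQueue.offer() is the
// hot-path cost discussed above.
public class QueueWindowStats {

    private static class Sample {
        final long timeMs;   // when the operation completed
        final long valueNs;  // e.g. operation latency in nanoseconds
        Sample(long timeMs, long valueNs) {
            this.timeMs = timeMs;
            this.valueNs = valueNs;
        }
    }

    private final ConcurrentLinkedQueue<Sample> samples =
        new ConcurrentLinkedQueue<Sample>();
    private final long windowMs;

    public QueueWindowStats(long windowMs) {
        this.windowMs = windowMs;
    }

    public void record(long valueNs) {
        samples.offer(new Sample(System.currentTimeMillis(), valueNs));
    }

    public double averageNs() {
        long cutoff = System.currentTimeMillis() - windowMs;
        // Drop expired samples from the head, then average the rest.
        for (Sample head; (head = samples.peek()) != null && head.timeMs < cutoff; )
            samples.poll();
        long sum = 0;
        int count = 0;
        for (Sample s : samples) {
            sum += s.valueNs;
            count++;
        }
        return count == 0 ? 0.0 : (double) sum / count;
    }
}
```

Note that the averaging pass walks every in-window sample, which is exactly why Jay's 300k-measurements concern applies to this design.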


GoogleCodeExporter commented 9 years ago
Hmm, that is not bad, but 3k ops per second is not nothing. Would it make sense to degrade the accuracy of the window to speed up the measurement? What I am thinking is two fixed-size long[] arrays, using an AtomicInteger to get the index into the arrays mod their length. One array would store the measurement and one would store the time of the measurement, giving a rolling buffer of fixed size. To compute a stat, one would search the array to find the proper start time and then sum over the correct interval. It is possible that under heavy load the buffer would hold fewer entries than the number of requests in the window, and we would degrade to the last N requests as the buffer cycles, but that would still probably be totally stable and acceptable since the measurement would be over a large number of requests. In fact this might be preferable, because it means the calls to get the stats would run in constant time rather than becoming increasingly computationally intensive as the load goes up.

This should be fairly fast because it requires no synchronization, and computing the stats is a linear scan over sequential memory.

Original comment by jay.kr...@gmail.com on 24 Feb 2010 at 4:20
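
(Editorial sketch of the rolling buffer Jay describes, assuming millisecond timestamps and the benign write races he is implicitly accepting; names are illustrative, not Voldemort code.)

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the fixed-size rolling buffer described above: two parallel
// long[] arrays (one for measurements, one for their timestamps) indexed
// by an AtomicInteger mod the buffer length. Writers never block; under
// heavy load the buffer degrades to "the last N requests" as slots are
// overwritten, which keeps reads constant-time.
public class RingWindowStats {

    private final long[] values;
    private final long[] times;
    private final AtomicInteger position = new AtomicInteger(0);
    private final long windowMs;

    public RingWindowStats(int size, long windowMs) {
        this.values = new long[size];
        this.times = new long[size];
        this.windowMs = windowMs;
    }

    public void record(long value) {
        // Lock-free write: claim a slot and overwrite whatever was there.
        // Math.abs guards against the counter wrapping negative. A torn
        // read across the two arrays is possible but tolerable for stats.
        int i = Math.abs(position.getAndIncrement() % values.length);
        values[i] = value;
        times[i] = System.currentTimeMillis();
    }

    public double average() {
        long cutoff = System.currentTimeMillis() - windowMs;
        long sum = 0;
        int count = 0;
        // Linear scan over sequential memory; slots never written (time 0)
        // or outside the window are skipped.
        for (int i = 0; i < values.length; i++) {
            if (times[i] >= cutoff) {
                sum += values[i];
                count++;
            }
        }
        return count == 0 ? 0.0 : (double) sum / count;
    }
}
```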

GoogleCodeExporter commented 9 years ago
Interesting idea. I had at one point added a limit to the size of the queue, but it didn't seem to affect performance at all (though it would benefit memory). I'll give this idea a shot.

Original comment by bruce.ri...@gmail.com on 24 Feb 2010 at 5:37

GoogleCodeExporter commented 9 years ago
Nice. With your method we're now tracking up to 100k operations and seeing 25.7k-25.9k ops/sec with new stats on client and server. I'd say that's quite acceptable.

Original comment by bruce.ri...@gmail.com on 24 Feb 2010 at 6:39


GoogleCodeExporter commented 9 years ago
Committed to GitHub @ http://github.com/Omega1/voldemort/commit/403999fa0d18e19cdbaf7cf6cdc4fa4bc25acfbc

Original comment by bruce.ri...@gmail.com on 24 Feb 2010 at 7:00

GoogleCodeExporter commented 9 years ago
Hit a bug with this change where tracking 100k operations was causing OOMs on our clients. I turned the max number of operations down to 10k and changed AbstractStoreClientFactory to not initialize stats unless JMX is enabled (which it is by default). I'm not sure why JMX is enabled for clients (seems a tad odd to me), but if we stick with this we might need two modes: one for clients that uses the old mechanism, and one for servers that do track operations.

Original comment by bruce.ri...@gmail.com on 25 Feb 2010 at 6:43
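
(Editorial note on the arithmetic behind the OOMs, assuming two parallel long[] buffers per tracked operation type: 100,000 slots × 8 bytes × 2 arrays is about 1.6 MB per operation type, and a client with several tracked operation types across many stores multiplies that into a substantial fraction of a small heap.)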

GoogleCodeExporter commented 9 years ago
The rationale for client JMX is that clients typically care more about how long you took to get an answer to them than how long you took to get an answer to the socket.

Original comment by jay.kr...@gmail.com on 26 Feb 2010 at 2:42

GoogleCodeExporter commented 9 years ago
Very nice. This should allow us to do arbitrary percentiles as well, which are problematic to compute otherwise.

I will get this merged in. Any tests you wrote to go along with it?

Original comment by jay.kr...@gmail.com on 26 Feb 2010 at 3:45
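
(Editorial sketch of the percentile point: because the buffer retains the raw samples, an arbitrary percentile is just a filter, sort, and index over a snapshot. The helper below is hypothetical, not Voldemort code; values/times are the parallel arrays from the ring-buffer sketch above.)

```java
import java.util.Arrays;

public class Percentiles {
    // Nearest-rank percentile over the in-window samples of the ring
    // buffer; e.g. quantile = 0.99 gives the 99th percentile.
    public static long percentile(long[] values, long[] times,
                                  long cutoff, double quantile) {
        long[] inWindow = new long[values.length];
        int n = 0;
        for (int i = 0; i < values.length; i++)
            if (times[i] >= cutoff)      // keep only in-window samples
                inWindow[n++] = values[i];
        if (n == 0)
            return 0;
        long[] sorted = Arrays.copyOf(inWindow, n);
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(quantile * n) - 1;
        return sorted[Math.max(0, Math.min(n - 1, rank))];
    }
}
```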

GoogleCodeExporter commented 9 years ago
I'm a tad ashamed to say no, there are no automated tests to go with this. My testing was basically throwing a ton of requests at a server while verifying that the numbers returned from JMX were fairly equivalent to what I was seeing on the client side using my tool (the differences were mostly because the server received more get requests than the client submitted, since deletes do a get first to retrieve the version info).

I'll see about writing a test case today to cover the basics.

Original comment by bruce.ri...@gmail.com on 26 Feb 2010 at 4:24

GoogleCodeExporter commented 9 years ago
Test case added - http://github.com/Omega1/voldemort/commit/d50f0a6f3490072d3b94f7c2ce20ff0cc467868f

Original comment by bruce.ri...@gmail.com on 26 Feb 2010 at 5:44

GoogleCodeExporter commented 9 years ago
Rocking, I will get this merged. I want this too :-)

Original comment by jay.kr...@gmail.com on 26 Feb 2010 at 11:49

GoogleCodeExporter commented 9 years ago

Original comment by jay.kr...@gmail.com on 2 Mar 2010 at 2:45

GoogleCodeExporter commented 9 years ago
More updates on this. I was still seeing OOMs even when tracking 10k ops, given enough stores, so I made the following changes:

1. Only track 1000 ops.
2. Lazily create the arrays, since many Tracked instances are simply never used for a particular store (sketch below).

http://github.com/Omega1/voldemort/commit/32789fbcb22e009107658cc8cf215ffd59c634b0

Original comment by bruce.ri...@gmail.com on 3 Mar 2010 at 9:21
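
(Editorial sketch of change #2, the lazy allocation; the class name LazyTracked, BUFFER_SIZE, and the method names are hypothetical, not the committed code.)

```java
// The sample arrays are only allocated on the first recorded operation,
// so trackers for operations a store never performs cost no memory.
public class LazyTracked {

    private static final int BUFFER_SIZE = 1000; // "only track 1000 ops"

    private long[] times;           // created together with values
    private volatile long[] values; // volatile guard, published last

    public void record(long value) {
        if (values == null)
            allocate();
        // ... claim a slot and write values[i]/times[i], as in the
        // ring-buffer sketch earlier in the thread ...
    }

    private synchronized void allocate() {
        if (values == null) {
            times = new long[BUFFER_SIZE];
            values = new long[BUFFER_SIZE]; // assign the guard last
        }
    }
}
```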

GoogleCodeExporter commented 9 years ago
Jay, are we ready to merge this in? 

Original comment by feinb...@gmail.com on 17 Mar 2010 at 1:47

GoogleCodeExporter commented 9 years ago
branch for this issue created at 
http://github.com/Omega1/voldemort/tree/issue221

Original comment by bruce.ri...@gmail.com on 19 Mar 2010 at 10:54

GoogleCodeExporter commented 9 years ago
We just encountered a bug with the code linked above, fix is @ http://github.com/Omega1/voldemort/commit/bb70317cc615d3ac1eb3257fcefade523a999e87

The fix is simple; I'll see about getting it merged into the issue-221 branch this week.

Original comment by bruce.ri...@gmail.com on 28 Sep 2010 at 9:02