yahoo / streaming-benchmarks

Benchmarks for Low Latency (Streaming) solutions including Apache Storm, Apache Spark, Apache Flink, ...
Apache License 2.0

How to use seen.txt and updated.txt to generate graphs for comparison #22

Open HassebJ opened 8 years ago

HassebJ commented 8 years ago

@rmetzger I bumped the versions for Spark, Storm and Flink (and also had to modify some pom files) to successfully run benchmarks for all of them, but I am having a hard time making sense of the data produced by the benchmarks. Can you please give me some pointers on how to use the seen.txt and updated.txt data to generate graphs of the latency characteristics of the frameworks, as shown in this post?

revans2 commented 8 years ago

Yes, we are all having the same issue. There appears to be something odd happening with Scala 2.11. If you upgrade all of the versions in the pom.xml, not just the shell script, and run mvn dependency:tree, you can see that most things were upgraded to 2.11, but some dependencies are still stuck on 2.10. The Flink Kafka integration is an example of this. Looking around, I was able to find a 2.11 version, but it looks like it is for a much newer version of Flink. Then I ran out of time.

If you want to try playing around with upgrading the version of Flink to 1.x (and with it, hopefully, the Kafka version), and then look at the Spark side as well, that would be great.

In the meantime, I suggest you try to find or build the older releases of Flink and Spark and place them in the download cache instead.

HassebJ commented 8 years ago

@revans2 I have updated Spark to 1.6 and Flink to 1.1 in this PR. I am now able to run the benchmarks successfully on a multi-node cluster as well as locally.

Can you please help me make sense of the seen.txt and updated.txt data? We have the following data in these files:

seen updated
23 6020
32 9996
29 10962
42 10926
44 9879
33 9842

What is the relation between the seen and updated values here? Specifically, how can I use this data to generate a latency graph like the one below (source)? I would really appreciate it if you could help me with this.

[latency graph from the linked blog post]

mmozum commented 8 years ago

Ditto, it would be great to have the methodology and possibly necessary scripts posted to the repo.

HassebJ commented 7 years ago

@mmozum How about we try and make sense of these benchmarks together? Have you been able to make any progress?

raj2261992 commented 7 years ago

I am also not able to understand the results in seen.txt and updated.txt and how to use them to generate a report. I ran the Storm benchmark with a load of 10000 for 100 seconds and also monitored it through the Storm UI. Below are some points I have observed:

1) As per my understanding, seen.txt only shows the number of events per campaign window after the filter phase. For example, if the spout emits 1,000,000 records, only 30-33% of the events survive the filter phase and the rest of the irrelevant events are ignored. So with 1,000,000 records in total, the total count of events across all campaign windows in seen.txt would be about 30% of that (i.e. 300,000).

2) The count of values in seen.txt and updated.txt would be the number of campaigns times the number of windows (benchmark duration / window duration), e.g. 100 x (100 s / 10 s) = 1000 in our case (see the quick check after this list).

3) So the way the given graph of processing time vs. percent of tuples processed might be plotted is: take the percentage values (like 1, 20, 25, 50 in your case) of the total records emitted after the filter phase from seen.txt, and their corresponding latencies from updated.txt.
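
A quick arithmetic check of point 2, as a minimal sketch. The campaign count, run length and window length below are taken from the example above and are assumptions about the setup, not values read from the benchmark configuration:

```python
# One row per (campaign, window) pair is assumed here; the numbers are the
# ones used in the example above, not values read from the benchmark config.
campaigns = 100        # assumed number of ad campaigns in the generated data
run_seconds = 100      # benchmark duration used in this test run
window_seconds = 10    # assumed tumbling window length
rows_expected = campaigns * (run_seconds // window_seconds)
print(rows_expected)   # 100 * 10 = 1000 rows in seen.txt and updated.txt
```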

Please let me know if my understanding is correct or not.

supunkamburugamuve commented 7 years ago

I am also trying to make sense of updated.txt.

According to the code, it gives the following value, which refers to the last event of a time window:

window.last_updated_at – window.timestamp

The documentation says that, in order to calculate the latency, we need to subtract window.duration from the above value in updated.txt. The problem is that this gives negative values for latency.

window.final_event_latency = (window.last_updated_at – window.timestamp) – window.duration
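
As a concrete illustration (a small sketch, not part of the benchmark code), applying the documented formula to the sample updated.txt values posted earlier in this thread, and assuming the default 10-second (10000 ms) window, gives:

```python
# Apply the documented formula to the updated.txt sample values above,
# assuming window.duration is the default 10 seconds (10000 ms).
WINDOW_MS = 10_000

updated_values = [6020, 9996, 10962, 10926, 9879, 9842]  # sample from above

for v in updated_values:
    final_event_latency = v - WINDOW_MS  # (last_updated_at - timestamp) - duration
    print(v, "->", final_event_latency)
# 6020 -> -3980, 9996 -> -4, 10962 -> 962, ... so several windows do indeed
# come out with a negative "latency", matching the problem described above.
```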

I'm not even sure the way they calculate latency is correct, because they use currentTimeMillis in the tasks and the timestamp generated by the client.

According to the code, window.last_updated_at is also written by a thread that runs every 1 second, so I'm not convinced this is correct.

apsankar commented 7 years ago

Gentle bump, has anyone figured out what seen.txt and updated.txt are supposed to mean, and how to draw the graphs from them?

Yitian-Zhang commented 6 years ago

I am also confused about the meaning of the values in seen.txt and updated.txt. Any answer is helpful.

jm9e commented 5 years ago

I had the same troubles but I think I can shed some light on that.

We used the Streaming Benchmark to see how our solution performs and wanted to compare the results with those of Apache Spark and Apache Storm. For that, of course we needed to interpret the results correctly.

How it works

The illustration below shows how the values in updated.txt and seen.txt should be interpreted.

How we approximated the actual latency

We approximated the actual latency with this formula:

actual_latency = (updated_txt_value - 10000) + (10000 / seen_txt_value)

The problem here is that the time span between the last event and the end of the window gets lost (the gray area in the illustration). Of course, the above formula only works if you assume events are evenly distributed over time. Also, the fact that the measurement is distorted by the value only being updated once per second remains an issue. However, this applies to all of the tested frameworks, so comparing the results still shows the differences in latency.
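
In case it helps anyone trying to reproduce the graphs, here is a minimal sketch of how that approximation could be turned into a percentile plot. It assumes seen.txt and updated.txt each contain one integer per line, aligned per campaign window as in the sample earlier in this thread, and that the window length is the default 10 seconds; numpy and matplotlib are just one convenient choice, not something the benchmark ships with:

```python
# Minimal sketch: read seen.txt / updated.txt, apply the approximation above,
# and plot a "percent of windows vs. latency" curve similar to the one in the
# original blog post. Assumes one integer per line in each file, with lines
# aligned per campaign window, and a 10-second (10000 ms) window.
import numpy as np
import matplotlib.pyplot as plt

WINDOW_MS = 10_000

def load_values(path):
    with open(path) as f:
        return np.array([float(line) for line in f if line.strip()])

seen = load_values("seen.txt")        # events counted per campaign window
updated = load_values("updated.txt")  # last_updated_at - window.timestamp

# Approximate latency of the last event in each window; the second term
# estimates the gap between the last event and the window end, assuming
# events are evenly distributed within the window.
latency = (updated - WINDOW_MS) + (WINDOW_MS / np.maximum(seen, 1))

latency_sorted = np.sort(latency)
percent = 100.0 * np.arange(1, len(latency_sorted) + 1) / len(latency_sorted)

plt.plot(latency_sorted, percent)
plt.xlabel("approximate final-event latency (ms)")
plt.ylabel("% of campaign windows")
plt.savefig("latency_percentiles.png")
```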

I hope this helps a bit.

If you are interested in how we used this benchmark, check out our blog post about it: https://bitspark.de/blog/benchmarking-slang