h2oai / db-benchmark

reproducible benchmark of database-like ops
https://h2oai.github.io/db-benchmark
Mozilla Public License 2.0

Spark first time longer #28

Closed. mattdowle closed this issue 6 years ago

mattdowle commented 6 years ago

Comment from Michael on Twitter here:

https://twitter.com/michael_chirico/status/1039356873760112641

Noticing that the first run of the first Spark benchmark is slow... I assume it's including the start-up time of the cluster?

It seems proportional to the data size, though. What's happening there, and is there a way to isolate it and perhaps report it separately?

jangorecki commented 6 years ago

To clarify @MichaelChirico's concern: the first run on Spark does not include the start-up time of the cluster (in this case a single-node cluster). The cluster is already started, and the data were already read into it and cached in memory.

I already had a brief discussion about this with @st-pasha, and the problem is not trivial to resolve.

The non-trivial parts are:

For now I do not see a good enough reason, or a fair enough strategy, to include warming up solutions for the "groupby" task. The proper way to address this looks to be adding a new task for grouping on warmed-up/analyzed/sorted/indexed data.

mattdowle commented 6 years ago

The dataset is being loaded from file, I think. Could it be that Spark is very fast at the file load but isn't materializing the data? Then, when the first group-by comes along, that's when it actually does the load from file. (Adding load times to the report was on the todo list regardless.) If lazy data ingest doesn't explain it, could an issue be raised under the Spark tag on SO or on a code-review site to see if they know?
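
For reference, a minimal sketch of how the lazy-ingest hypothesis could be checked in PySpark (a local session with illustrative file and column names, not the benchmark's actual harness):

import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("lazy-ingest-check").getOrCreate()

t0 = time.time()
df = spark.read.csv("G1_1e7_1e2_0_0.csv", header=True, inferSchema=True)
print("read returned after", round(time.time() - t0, 3), "s")  # inferSchema itself triggers a scan; with a fixed schema this returns almost instantly

t0 = time.time()
df.count()  # cheap action: forces Spark to actually read the data
print("first action took", round(time.time() - t0, 3), "s")

t0 = time.time()
df.groupBy("id1").sum("v1").collect()  # first group-by; compare against a repeated run
print("first groupBy took", round(time.time() - t0, 3), "s")

If the first action absorbs most of the time and the first group-by then matches later runs, lazy ingest would explain the pattern.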

MichaelChirico commented 6 years ago

@mattdowle I'm not sure whether this is what's going on, but yes, operations are generally lazy in Spark.

This code will be almost instant:

spark.read.parquet('s3://path/to/folder')  # lazy: only builds a plan, no data is read yet

Even adding some filtering & other basic transformations will do nothing.
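
For instance (illustrative column names; these are still only transformations, so nothing is executed):

SDF = spark.read.parquet('s3://path/to/folder')
SDF2 = SDF.filter(SDF['v1'] > 0).select('id1', 'v1')  # builds the query plan only; returns immediately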

You can force past lazy evaluation by doing something inexpensive like:

SDF = spark.read.parquet('s3://path/to/folder')
SDF.count()  # an action: forces Spark to actually read the data

It's open to debate whether something like SDF.cache() is legitimate for the comparison.
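
A rough sketch of what that warm-up would look like, if caching were considered fair (column names again illustrative):

SDF = spark.read.parquet('s3://path/to/folder')
SDF.cache()   # marks the DataFrame for in-memory caching; itself lazy
SDF.count()   # action: materializes the data into the cache
SDF.groupBy('id1').sum('v1').collect()  # now runs against cached, in-memory data

The debate is then whether timings should start before or after the cache()/count() pair.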

jangorecki commented 6 years ago

Solved in a26b8afffb9ab1051c4c2ffc190cb60f4a424cee