bzz opened this issue 6 years ago
Optimize, in order to utilize that resource better (i.e. in the case of throughput, have more executor JVMs running on the same machine).
How to do that? We don't control the Spark cluster.
Let's measure, identify, and document the bottleneck first, set preliminary expectations on resources for 100k, and then discuss the possible options we have, e.g. this could be a powerful argument for changing https://github.com/src-d/charts/tree/master/spark to Apache Spark on k8s.
We would be able to improve the performance expectation model based on more data later on.
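For context on the "more executor JVMs on the same machine" point: on a standalone cluster this is usually governed by per-executor caps. A minimal sketch, with made-up worker sizes and values rather than our actual settings:

import org.apache.spark.SparkConf

// Hypothetical sizes: on a 16-core / 32GB standalone worker, capping each executor at
// 4 cores / 6GB lets the master start ~4 executor JVMs per machine instead of one.
val conf = new SparkConf()
  .set("spark.executor.cores", "4")
  .set("spark.executor.memory", "6g")
  .set("spark.cores.max", "48") // hypothetical app-wide core cap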
Thanks for keeping it updated!
BTW, super-nice issue description and example of how to reproduce 👍
Engine issue is resolved in https://github.com/src-d/engine/releases/tag/v0.5.1
Yep. But the Engine API has changed a bit. We need to update Gemini.
Ran Gemini on the new 1k dataset with the new Engine. And it works!
The bad news is the timing: 24 min. I don't really know how to profile it, but I saw that only one job takes most of the time; most probably there is one huge repo.
The 10k run has failed with https://github.com/src-d/engine/issues/332
Currently blocked by https://github.com/src-d/engine/issues/336
To move this forward, as the DR team is super busy now, can we please submit a PR to Engine that just logs RevWalkException without failing (the same way MissingObjectException is handled), and run Gemini with a custom-built version of Engine from that PR, to avoid waiting for a release?
@carlosms could you please check if https://github.com/src-d/engine/pull/347 solves the issue and allows us to move forward with https://github.com/src-d/gemini/issues/42 ?
If that PR is tested on real data and solves the issue, it may be worth posting this information on the PR as well.
Engine 0.5.7 was released 🎉 with many bug fixes, and discussions like https://github.com/src-d/minutes/pull/210/files#diff-a0ec2b18d53b6bebfc2a342ed864a52fR34 should raise the priority of finishing the runs of Gemini file duplication detection up to PGA size.
Title and description are updated to represent the current goal.
10k repos are processed successfully with Engine 0.5.7. Full PGA is failing with OOM with the default params; they need tuning.
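For the OOM, a sketch of the first knobs to try, assuming the defaults (1GB executors, 200 shuffle partitions) are simply too small for full PGA; the values are guesses, not tested settings:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .config("spark.executor.memory", "8g")          // default is 1g
  .config("spark.memory.fraction", "0.7")         // give more of the heap to execution/storage
  .config("spark.sql.shuffle.partitions", "2048") // the default of 200 makes huge partitions at PGA scale
  .getOrCreate()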
Plan is:
- PGA is downloading to the pipeline HDFS cluster, to hdfs://hdfs-namenode/pga/siva/latest (check with hdfs dfs -ls hdfs://hdfs-namenode/pga/siva/latest).
- WIP by the pga-alex pod, running pga get -v -j 32 -o hdfs://hdfs-namenode:8020/pga 2>&1 | tee -a /go/pga-1.log
At this rate it will take ~25h to get there.
PGA download is finished 🎉 but it's a bit :suspect: as it is only 2.4TB, not the 2.7TB it is rumored to be. Will verify PGA integrity first with https://github.com/src-d/datasets/issues/53
Pre-conditions for running new Gemini on pipeline staging Apache Spark cluster:
blocked by src-d/backlog#1266
Full PGA was downloaded to HDFS 🎉 https://github.com/src-d/datasets/issues/53#issuecomment-396528917
$ zgrep -o "[0-9a-z]*\.siva" ~/.pga/latest.csv.gz | sort | uniq | wc -l
239807
$ hdfs dfs -ls -R hdfs://hdfs-namenode/pga/siva/latest | grep -c "\.siva$"
239807
Plan
- ./report, using the DB from the hash above: ~6h

Blocked, as all Feature Extractors, deployed under https://github.com/src-d/issues-infrastructure/issues/184, are part of a new, separate Apache Spark cluster in a different k8s namespace (-n feature-extractor) that does not seem to have access to HDFS 😕

Hash has finished successfully; I'm now submitting the PRs to Gemini that enabled it.

Report is:
- cc.makeBuckets(): 40min
- Report.findConnectedComponents(): ~6h

1h for hashing ~1/250 of PGA on 3 machines of the pipeline staging cluster:
time MASTER="spark://fe-spark-spark-master:7077" ./hash -v \
-k dockergemini4 \
-h scylladb.default.svc.cluster.local \
hdfs://hdfs-namenode.default.svc.cluster.local/pga/siva/latest/ff | tee hash-pga-ff-4.logs
Feature Extraction exceptions
Processed: 4304060, skipped: 120
- TimeoutException -> 109
- StatusRuntimeException -> 11
real 61m2.058s
ERROR SparkFEClient: feature extractor error: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
ERROR SparkFEClient: feature extractor error: io.grpc.StatusRuntimeException: INTERNAL: Exception deserializing request!
FATAL vendor/golang.org/x/text/unicode/norm/tables.go: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (4323138 vs. 4194304)
WARN Bblfsh: FATAL src/main/java/org/yardstickframework/BenchmarkServerStartUp.java: EOF
WARN Bblfsh: FATAL xs6/extensions/crypt/crypt_ghash.js: message is not defined; unsupported: non-object root node
WARN Bblfsh: FATAL vendor/golang.org/x/text/encoding/charmap/tables.go: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5617479 vs. 4194304)
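The ResourceExhausted lines come from the gRPC client's default 4MB inbound message limit. Purely as an illustration of the knob involved (not the actual Gemini/bblfsh client wiring; the endpoint below is made up), grpc-java exposes it on the channel builder:

import io.grpc.ManagedChannelBuilder

// Hypothetical endpoint; the real feature-extractor/bblfsh channels are built inside the clients.
val channel = ManagedChannelBuilder
  .forAddress("feature-extractor", 9001)
  .usePlaintext()
  .maxInboundMessageSize(16 * 1024 * 1024) // raise the 4MB default behind the ResourceExhausted errors
  .build()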
$ kubectl exec -it scylladb-0 -- /bin/bash
$ cqlsh
use dockergemini4;
select count(1) from meta;
127379
select count(1) from hashtables;
426560
Thanks a lot for the detailed results, @bzz!
Question: how are we sampling the repos for each of these tests?
Good question. We have always used just a single shard of the PGA dataset: all the repos whose .siva file names start with the prefix /ff/.
Overall, on Apache Spark performance depends A LOT on data distribution, so attaching a .siva file size distribution histogram in ~100MB buckets:
hdfs dfs -du hdfs://hdfs-namenode/pga/siva/latest/ff/ | grep "\.siva$" | awk -v "size=100048576" -f hist.awk
bucket start (bytes)   bucket end (bytes)   .siva files
0                      100048576            912
100048576              200097152            17
200097152              300145728            2
300145728              400194304            1
400194304              500242880            1
500242880              600291456            0
600291456              700340032            0
700340032              800388608            0
800388608              900437184            0
900437184              1000485760           1
1000485760             1100534336           1
Comparing the DataFrame and RDD APIs for computing per-feature document counts, on two test datasets (local: 1MB, 30k features; cluster: 170MB, 5.5M features).

DataFrame API (local: 8 sec, cluster: 4 sec):
import org.apache.spark.sql.functions.count
import spark.implicits._ // Encoder for the (String, Long) tuples produced by .map

val freqDf = features
  .withColumnRenamed("_1", "feature").withColumnRenamed("_2", "doc")
  .select("feature", "doc")
  .distinct
  .groupBy("feature")
  .agg(count("*").alias("cnt"))
  .map(row => (row.getAs[String]("feature"), row.getAs[Long]("cnt")))
  .collect().toMap
RDD API (local: 4 sec, cluster: 5 sec):
val freq = features.rdd
.map { case (feature, doc, _) => (feature, doc) }
.distinct
.map { case (token, _) => (token, 1) }
.reduceByKey(_ + _)
.collectAsMap()
The DataFrame API does not seem to change performance much, but it still has the nice benefit of a uniform API.
There are 141 .siva files bigger than 1GB, with the rest (260k+) being smaller. Those outliers can be moved aside to get a shorter tail of task execution times on average.
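A minimal sketch of listing those >1GB outliers for moving aside, assuming the HDFS layout from the commands above and direct Hadoop FileSystem access:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new URI("hdfs://hdfs-namenode"), new Configuration())
val oneGb = 1L << 30
// .siva files bigger than 1GB in one shard; repeat per prefix (or list recursively) for full PGA.
fs.listStatus(new Path("/pga/siva/latest/ff"))
  .filter(s => s.isFile && s.getPath.getName.endsWith(".siva") && s.getLen > oneGb)
  .foreach(s => println(s"${s.getLen}\t${s.getPath}"))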
After moving the biggest files aside, jobs fail with
org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 32343809.
After setting spark.kryoserializer.buffer.max=1g, jobs fail with
tech.sourced.siva.SivaException: Exception at file 022c7272f0c1333a536cb319beadc4171cc8ff6a.siva: At Index footer, index size: Java implementation of siva doesn't support values greater than 9223372036854775807
which at this point might indicate broken .siva files from pga get.
@bzz here is your issue: https://github.com/src-d/engine/issues/414 different file but the same error.
Simple processing of the full PGA from /pga2 with Engine finished in 59.1h, using 16 cores / 8GB RAM on 9 machines of the staging pipeline cluster 🎉
Removing the outliers, the ~140 .siva files (of ~270k) that are >1GB each, would speed it up 2-3x.
Caching all files on disk in Parquet fails though, with
Job aborted due to stage failure: Total size of serialized results of XX tasks (1025 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
This happens because, for a String column, the DataFrame API keeps the longest string in memory, and here that is a full file's content of more than 1GB.
The on-disk Parquet cache was failing due to the number of tasks (~40k) being too high for our cluster configuration; that was fixed by reducing the number, and it can now proceed over the full PGA (~50h) 🎉 but it is failing at the end now 😖 with
Job aborted due to stage failure: Task 49 in stage 4.1 failed 4 times, most recent failure:
Lost task 49.3 in stage 4.1 (TID 27912, 10.2.56.116, executor 1):
java.io.FileNotFoundException: /spark-temp-data/spark-fb7fbf1e-033e-4122-9464-16acdc52fe34/executor-801bb555-6e93-4c4e-b3f8-46bc33ca9639/blockmgr-309ecee9-419e-45ab-a595-03dc3157b641/26/temp_shuffle_2eddf3bb-b5c0-488c-b397-47e4e4921a32
(No such file or directory)
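Re "fixed by reducing the number" of tasks above: a sketch of one way to do it before the Parquet write, with a hypothetical DataFrame name, partition count and output path (not necessarily how it was actually done here):

// `files` stands for the DataFrame being cached; 2000 is a guess, roughly 2-4x the total executor cores.
files
  .coalesce(2000)
  .write
  .mode("overwrite")
  .parquet("hdfs://hdfs-namenode/pga-cache/files.parquet")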
Document in the README the resources needed to successfully process 1k, 2k, 10k, 100k and the whole PGA of .siva files.
So a good start would be