I am running spark-perf on a scale-up node with Spark v1.6.0 in stand-alone mode, and I'm having trouble fine-tuning the runs. The problem I'm currently facing is that I'd like to run multiple workers (with smaller JVM heaps) on a single node. But no matter how I configure spark-perf, I see:
1 master, N worker instances
N coarse-grained executors
1 JVM running the actual test
So I am able to get multiple worker instances fired up, but I only see one JVM running an actual test (e.g. glm-regression). Am I messing up or missing a configuration option? The other thing I'm curious about is the coarse-grained executors; I have never seen those running before when using stand-alone mode.
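For reference, this is roughly what I'm doing (values are illustrative, not my exact numbers). On the Spark side I start the extra workers through conf/spark-env.sh, e.g. SPARK_WORKER_INSTANCES=4 and SPARK_WORKER_MEMORY=8g, and in spark-perf's config/config.py I try to size the executors so one fits inside each worker. The fragment below is a sketch of my edits on top of the stock config.py.template, so the variable and option names are from memory of that template rather than copied verbatim:

    # config/config.py -- only the lines I changed; the rest is the
    # unmodified config.py.template that ships with spark-perf
    # (socket and JavaOptionSet are, as I recall, already imported there).

    # Point spark-perf at the standalone master on this node.
    SPARK_CLUSTER_URL = "spark://%s:7077" % socket.gethostname()

    # Driver JVM for the test harness itself.
    SPARK_DRIVER_MEMORY = "8g"

    COMMON_JAVA_OPTS = [
        # Executor heap kept smaller than SPARK_WORKER_MEMORY so that
        # each of the N worker instances can host its own executor.
        JavaOptionSet("spark.executor.memory", ["6g"]),
    ]

With that in place I do get the N worker daemons and N coarse-grained executors from the list above, but still only the one JVM actually running the test.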
Thanks.