nchammas / flintrock

A command-line tool for launching Apache Spark clusters.

Separate Configurations for Head and Worker Nodes #199

Open · PiercingDan opened this issue 7 years ago

PiercingDan commented 7 years ago

It would be good to have separate configurations for the head (master) and worker nodes: for example, separate instance types, AMIs, and EBS volume sizes for each role.

EDIT: I see a pull request is currently underway for separate master/worker instance types. Perhaps separate AMIs and EBS volume sizes would also be doable.
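For illustration, per-role settings might look roughly like the sketch below in Flintrock's `config.yaml`. The `master:`/`workers:` nesting and the `ebs-volume-size` key are hypothetical and not part of the actual config schema; today the `providers: ec2:` section takes a single `instance-type` and `ami` that apply to every node.

```yaml
# Hypothetical config sketch only -- the master:/workers: nesting is not
# part of Flintrock's actual config schema.
provider: ec2

providers:
  ec2:
    region: us-east-1
    key-name: my-key                 # illustrative value
    identity-file: /path/to/key.pem  # illustrative value
    master:                          # hypothetical per-role section
      instance-type: m4.large
      ami: ami-00000000
      ebs-volume-size: 50            # GiB; hypothetical key
    workers:                         # hypothetical per-role section
      instance-type: r4.2xlarge
      ami: ami-00000000
      ebs-volume-size: 500           # GiB; hypothetical key

launch:
  num-slaves: 4
```

One way to keep backward compatibility could be to treat the existing flat keys as defaults that the per-role sections override.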

nchammas commented 6 years ago

Related: #166, #241.

pferrel commented 5 years ago

+1

You can't launch just a master with zero slaves. If you could, you could create the master with one instance type and then use `add-slaves` to add workers with another.
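As a rough sketch of that workaround, assuming `launch` accepted `--num-slaves 0` and `add-slaves` accepted its own instance type (neither is the case today, which is the point above):

```sh
# Hypothetical workaround only -- Flintrock rejects a zero-slave launch,
# and per-call flag support on add-slaves is assumed, not actual.

# Launch the master alone on a small instance type:
flintrock launch my-cluster \
    --num-slaves 0 \
    --ec2-instance-type m4.large

# Then attach workers of a different (larger) instance type:
flintrock add-slaves my-cluster \
    --num-slaves 4 \
    --ec2-instance-type r4.2xlarge
```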

devyn commented 4 years ago

I would really like this because small clusters don't need many compute resources on the master. Depending on your application, you could drive 4 slaves of a relatively large instance type with a much smaller master instance type and be totally fine.

Edit: it also seems like it would be quite easy to implement compared to colocating a slave on the same machine as the master. That colocation should probably be optional anyway, because I would imagine that in clusters with hundreds of slaves the master does need to have some beef to it.
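To make the sizing example above concrete, a hypothetical per-role flag on `launch` could look like the sketch below; `--ec2-master-instance-type` is not a real Flintrock option.

```sh
# Hypothetical sketch: --ec2-master-instance-type is not an existing
# Flintrock flag; today one --ec2-instance-type applies to every node.
flintrock launch my-cluster \
    --num-slaves 4 \
    --ec2-instance-type r4.2xlarge \
    --ec2-master-instance-type t2.medium
```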