rkitchen opened this issue 4 years ago
Hey, Rob. Thanks for the feature request, and for your interest in ParallelCluster. It's particularly great to hear that you're enjoying the functionality introduced in v2.9.
This would be a difficult feature to implement in the short term. There are a few complicating factors, the biggest being that ParallelCluster generally manages resources via CloudFormation; provisioning FSx file systems dynamically, outside the cluster's CloudFormation stack, would require us to replicate that lifecycle management ourselves.
That said, better managing costs related to idle FSx file systems is a common customer request. I'll capture this request so we can use it as input when considering future changes.
There are a couple of workarounds you could use to save costs. The simplest solution is probably to utilize FSx's HDD-based file systems. The pricing information can be found here.
Another, more involved workaround would be to use hooks defined by Slurm, such as the epilog and prolog scripts, to create the file system before jobs run and tear it down afterwards. This would provide the dynamic behavior you're requesting; a rough sketch follows.
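To make the shape of that workaround more concrete, here's a rough, untested sketch in Python/boto3 of how a prolog/epilog pair might manage a scratch file system. Everything in it is illustrative: the subnet and security group IDs are placeholders, the tag used to find the file system is made up, and mounting/unmounting Lustre on the compute nodes is left out entirely.

```python
#!/usr/bin/env python3
# Rough sketch only: create an FSx for Lustre scratch file system before jobs
# run (called from a Slurm prolog) and delete it once the queue is empty
# (called from an epilog). Subnet/security-group IDs are placeholders, and
# mounting on the nodes (mount -t lustre ...) is not handled here.
import subprocess
import sys
import time

import boto3

fsx = boto3.client("fsx")

SUBNET_ID = "subnet-0123456789abcdef0"      # placeholder
SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder
TAG = {"Key": "purpose", "Value": "pcluster-scratch"}  # made-up tag


def find_scratch_fs():
    """Return the existing scratch file system tagged for this cluster, if any."""
    for fs in fsx.describe_file_systems()["FileSystems"]:
        if TAG in fs.get("Tags", []):
            return fs
    return None


def create_scratch_fs():
    """Create a SCRATCH_2 Lustre file system and wait until it is AVAILABLE."""
    fs = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,  # smallest SCRATCH_2 size, in GiB
        SubnetIds=[SUBNET_ID],
        SecurityGroupIds=[SECURITY_GROUP_ID],
        LustreConfiguration={"DeploymentType": "SCRATCH_2"},
        Tags=[TAG],
    )["FileSystem"]
    while fs["Lifecycle"] != "AVAILABLE":
        time.sleep(30)
        fs = fsx.describe_file_systems(
            FileSystemIds=[fs["FileSystemId"]]
        )["FileSystems"][0]
    return fs


def queue_is_empty():
    """True if squeue reports no pending or running jobs."""
    out = subprocess.run(["squeue", "-h"], capture_output=True, text=True, check=True)
    return out.stdout.strip() == ""


if __name__ == "__main__":
    action = sys.argv[1]  # "prolog" or "epilog"
    existing = find_scratch_fs()
    if action == "prolog" and existing is None:
        create_scratch_fs()
    elif action == "epilog" and existing is not None and queue_is_empty():
        fsx.delete_file_system(FileSystemId=existing["FileSystemId"])
```

Note that deleting a SCRATCH file system destroys its contents, so this only makes sense for truly transient data, and a real implementation would also need to handle mounting, failures mid-create, and races between concurrently finishing jobs.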
Currently, creating/deleting entire clusters as a cost-saving workaround is a little unwieldy.
What are your pain points here? Is there anything we can do to make this less unwieldy? Is it mostly the time the delete/create cycle takes?
Regarding the following from my first response:
There are a couple of workarounds you could use to save costs. The simplest solution is probably to utilize FSx's HDD-based file systems. The pricing information can be found here.
I should add that, unfortunately, we don't yet support creating HDD file systems as part of cluster creation. We're hoping to provide this functionality in the near future. In the meantime, you can still use an HDD file system by creating one outside of the pcluster CLI and specifying that file system's ID as the fsx_fs_id parameter in the [fsx] section of the config file.
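For reference, wiring an externally created file system in looks roughly like the excerpt below in the 2.x config; the section label, mount point, and file system ID are just placeholders.

```ini
; Illustrative pcluster 2.x config excerpt; label, mount point, and ID are placeholders
[cluster default]
fsx_settings = scratchfs

[fsx scratchfs]
shared_dir = /fsx
fsx_fs_id = fs-0123456789abcdef0
```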
Thanks for the swift reply!
I have to admit I wasn't totally sure whether SCRATCH_1/2 were HDD or SSD; it sounds like they're the latter, which is good to know.
As for the pain points, it's really the auto-shutdown/scale-to-zero that is most valuable. Before FSx, it was really nice to have a fire-and-forget job queue where the only persistent cost was a small master node and a small EBS volume (or two). Having even the cheapest FSx file system is, I suppose, not dissimilar in cost to a c5a.xlarge RI, which I wonder might actually be more functional as a master node hosting a large shared EBS volume and doubling up as a dev instance/playground...
Thanks for considering the request and for the detailed and thoughtful response. I used to use StarCluster back in the day, and it's great to see pcluster being so actively developed. It feels a little silly quibbling over $30/month, but in the non-profit/academic world it is really nice to be able to burst costs/usage to align with particular projects/grants - it makes the accounting much easier.
HDD support has been added with https://github.com/aws/aws-parallelcluster/pull/2137 and released as part of the 2.10.0 release.
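If I'm reading the change right, requesting an HDD-backed file system from the config should then look something like the excerpt below. The exact values are only indicative (HDD is tied to a PERSISTENT_1 deployment and specific throughput/capacity tiers), so check the 2.10.0 docs for the valid combinations.

```ini
; Indicative only; verify valid HDD capacity/throughput combinations against the 2.10.0 docs
[fsx scratchfs]
shared_dir = /fsx
storage_type = HDD
deployment_type = PERSISTENT_1
per_unit_storage_throughput = 12
storage_capacity = 6000
```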
After struggling a little with NFS bandwidth issues when using EFS/EBS scratch space for larger clusters, FSx has been working like a dream. Is there any concept, however, of mounting FSx only to the compute node(s) and having it be deleted when no compute nodes are running (maybe with its own, longer timeout)? I wouldn't mind waiting a little longer for compute nodes to come up (while pcluster also creates a new FSx) in exchange for vastly reduced monthly costs for bursty workloads. Currently, creating/deleting entire clusters as a cost-saving workaround is a little unwieldy.
Also, I love the new Slurm queues - super intuitive and they function well - awesome work, thank you!