Open verdurin opened 4 years ago
If we consider the Lustre mountpoints to be created by a process external to Terraform, it is relatively straightforward to implement a Lustre client class that mounts custom mountpoints. Here is a quick draft of `lustre.pp` that would be added to `site/profile/manifests` in the puppet-magic_castle repo:
```puppet
class profile::lustre::client (Hash[String, Any] $mountpoints) {
  yumrepo { 'aws-fsx':
    name     => 'AWS FSx Packages - $basearch',
    baseurl  => 'https://fsx-lustre-client-repo.s3.amazonaws.com/el/7/x86_64/',
    enabled  => 1,
    gpgkey   => 'https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc',
    gpgcheck => 1,
  }
  yumrepo { 'aws-fsx-src':
    name     => 'AWS FSx Source - $basearch',
    baseurl  => 'https://fsx-lustre-client-repo.s3.amazonaws.com/el/7/SRPMS/',
    enabled  => 1,
    gpgkey   => 'https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc',
    gpgcheck => 1,
  }
  package { ['kmod-lustre-client', 'lustre-client']:
    ensure  => installed,
    require => Yumrepo['aws-fsx'],
  }
  $defaults = {
    'ensure'  => present,
    'fstype'  => 'lustre',
    'options' => 'noatime,flock',
    'require' => [
      Package['kmod-lustre-client'],
      Package['lustre-client'],
    ],
  }
  file { keys($mountpoints):
    ensure => 'directory',
    mode   => '0755',
  }
  create_resources(mount, $mountpoints, $defaults)
}
```
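For reference, each entry of `$mountpoints`, merged with the `$defaults` hash, ends up as one line in `/etc/fstab` via Puppet's `mount` resource. Here is a rough Python sketch of that rendering (the `render_fstab` helper is hypothetical, not part of the repo; the field order matches the standard fstab layout):

```python
# Sketch of what the mount resources above amount to: one fstab line
# per mountpoint, using the default fstype and options from the class.
def render_fstab(mountpoints, fstype="lustre", options="noatime,flock"):
    lines = []
    for _path, params in sorted(mountpoints.items()):
        # fstab fields: device  mountpoint  fstype  options  dump  pass
        lines.append(
            f"{params['target']}\t{params['name']}\t{fstype}\t{options}\t0\t0"
        )
    return "\n".join(lines)

mountpoints = {
    "/lustre1": {
        "name": "/lustre1",
        "target": "fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/zaaanbmw",
    },
}
print(render_fstab(mountpoints))
```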
The mountpoints could be defined using the `hieradata` variable in `main.tf`.
Here is an example of a mountpoint definition for the preceding `profile::lustre::client` class (the hiera key targets the class's `$mountpoints` parameter via automatic parameter lookup):
```yaml
profile::lustre::client::mountpoints:
  '/lustre1':
    name: '/lustre1'
    target: 'fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/zaaanbmw'
  '/lustre2':
    name: '/lustre2'
    target: 'fs-1ce9c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/pdddmrmw'
```
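Each value in the hash must carry at least the `name` and `target` keys that the `mount` resource consumes. A small sanity-check sketch in Python (the `validate_mountpoints` helper is hypothetical, just illustrating the expected shape):

```python
# Check that a mountpoints hash matches the structure the class expects:
# absolute-path keys, and 'name'/'target' present in every value.
def validate_mountpoints(mountpoints):
    for path, params in mountpoints.items():
        if not path.startswith("/"):
            raise ValueError(f"mountpoint key {path!r} is not an absolute path")
        for key in ("name", "target"):
            if key not in params:
                raise ValueError(f"{path}: missing required key {key!r}")
    return True

example = {
    "/lustre1": {
        "name": "/lustre1",
        "target": "fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/zaaanbmw",
    },
    "/lustre2": {
        "name": "/lustre2",
        "target": "fs-1ce9c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/pdddmrmw",
    },
}
print(validate_mountpoints(example))  # → True
```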
You can then add `profile::lustre::client` to each node definition in `site.pp` that should mount the filesystems:
```puppet
node /^[a-z0-9-]*node\d+$/ {
  include profile::consul::client
  include profile::base
  include profile::metrics::exporter
  include profile::rsyslog::client
  include profile::cvmfs::client
  include profile::gpu
  include profile::singularity
  include profile::jupyterhub::node
  include profile::nfs::client
  include profile::lustre::client # <- add Lustre custom mountpoints
  include profile::slurm::node
  include profile::freeipa::client
}
```
@cmd-ntrf many thanks for this.
Something that does occur to me is a complication with VPCs. As you know we need to specify a VPC when creating an FSx filesystem. Given that at the moment this needs to happen outwith Terraform and MC, and MC by default creates its own VPC for the cluster, I'm wondering how to handle the routing between the two.
A workaround for now would be to mount the filesystem manually, having created it specifically to use the VPC that MC has created.
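That manual mount comes down to a single `mount(8)` invocation built from the filesystem's DNS name and its Lustre mount name. A small Python sketch that assembles the command line (the `build_mount_cmd` helper is illustrative; the option string matches the class defaults above, and the resulting command would be run as root on a node inside the right VPC):

```python
# Assemble the manual FSx for Lustre mount command for one filesystem.
def build_mount_cmd(dns_name, mount_name, mountpoint, options="noatime,flock"):
    return f"mount -t lustre -o {options} {dns_name}@tcp:/{mount_name} {mountpoint}"

cmd = build_mount_cmd(
    "fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com",
    "zaaanbmw",
    "/lustre1",
)
print(cmd)
# → mount -t lustre -o noatime,flock fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/zaaanbmw /lustre1
```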
Quick update: I am drafting a PR to add support for cloud provider filesystems using a new variable `filesystems`.
An example for AWS is available here: https://github.com/ComputeCanada/magic_castle/blob/filesystems/examples/advanced/filesystems/aws/main.tf
The creation of filesystem resources is functional, what is missing is the Puppet code to mount the filesystem.
Related to #36 and redirected here from the software-stack repo: We would like to supplement the EBS-backed NFS storage with FSx on AWS (lots of abbreviations there...).
Is there support currently for custom mountpoints, such that they would be added to new compute node instances as they are provisioned? If not, where should we start looking in the Puppet code?
I know you plan to add things like FSx in the future, but hopefully this would be enough for us in the meantime.