nextflow-io / nf-nomad

Hashicorp Nomad executor plugin for Nextflow
https://nextflow-io.github.io/nf-nomad/
Apache License 2.0

Mount volume spec #22

Closed · jagedn closed 7 months ago

jagedn commented 7 months ago

Closes #13

This PR implements a new volume config where the user can specify the type and the name of the volume.

Valid types are host, csi and docker, and the service creates a JobSpec that mounts the volume.

dockerVolume is deprecated, so users must use the new configuration if they want to mount a docker volume.

We've agreed that only one volume is required, at least for the moment.
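
For example, a docker volume would be configured roughly like this under the new scheme (the volume name here is illustrative, not a default):

nomad {
    jobs {
        volume = { // replaces the deprecated dockerVolume option
            type "docker"
            name "my-docker-volume" // illustrative name
        }
    }
}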

abhi18av commented 7 months ago

Adding some context here regarding the current syntax for specifying the volume:


nomad {
    client {
        address = "http://NOMAD_IP:4646"
        token = ""
    }

    jobs {
       datacenters = "sun-nomadlab"
       volume = { // final version is a closure, not a map
           type "csi" 
           name "csi-volume-name" 
       }
    }
}
jagedn commented 7 months ago

Once we have the "asciidoctor/gh-pages/user guide" ready, it will be a "requirement" to document this kind of stuff in it

abhi18av commented 7 months ago

@jagedn , that makes sense.

Just adding a thought for future reference: what if we want to allow mounting multiple volumes within the same job?

In that case, wouldn't it be easier to have a Map-oriented syntax, which can compose multiple volume maps into an array specification, as shown below?

    jobs {
       datacenters = "sun-nomadlab"
       volume = [
                  [
                     type: "csi",  // CSI volume type - a volume shared across multiple client nodes
                     name: "csi-volume-name"
                  ],
                  [
                     type: "host",  // HOST volume type - maybe nomad client node-specific block storage
                     name: "host-volume-name"
                  ],
                ]
    }

This way we would keep the possibility of (multiple) HOST + CSI volumes in the same job definition, and therefore allow a mix-n-match of node-specific and shared file systems for each task.
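
For reference, a rough sketch of how either form, a single volume map or a list of them, could be normalized and validated (the helper is hypothetical, not part of the plugin code):

// hypothetical helper - sketch only, not part of the plugin
def normalizeVolumes(volume) {
    if( !volume ) return []
    def list = volume instanceof List ? volume : [volume]
    list.each { v ->
        // valid types per this PR: host, csi, docker
        assert v.type in ['host', 'csi', 'docker'] : "unsupported volume type: ${v.type}"
        assert v.name : 'a volume name is required'
    }
    return list
}

// both forms produce a uniform list of volume maps
assert normalizeVolumes([type: 'csi', name: 'csi-volume-name']).size() == 1
assert normalizeVolumes([[type: 'csi', name: 'a'], [type: 'host', name: 'b']]).size() == 2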

What do you think? 🤔