Hi,
it depends on how each HPC provides a particular application. In the FT2 case, Singularity is provided as a module, but on other HPCs it can be a system application. As for MPI libraries, these are usually available through the module system, but we cannot force a particular one because it is (for the moment) strongly tied to the container we want to run.
My proposal is:
{ "requirements":
{ "OpenMPI": "1.10.2",
"Singularity": "2.4.2",
...
}
}
And, for each HPC, a catalogue of the modules it provides:

```json
[
  {
    "package": "OpenMPI",
    "default": "1.10.2",
    "versions": [
      {
        "id": "1.10.2",
        "name": "openmpi/1.10.2",
        "dependencies": ["gcc/5.3.0"]
      },
      {
        "id": "2.2.1",
        "name": "openmpi/2.2.1",
        "dependencies": ["gcc/6.3.0"]
      }
    ]
  },
  {
    "package": "Singularity",
    "default": "2.4.2",
    "versions": [
      {
        "id": "2.3.1",
        "name": "singularity/2.3.1",
        "dependencies": []
      },
      {
        "id": "2.4.2",
        "name": "singularity/2.4.2",
        "dependencies": []
      }
    ]
  },
  ...
]
```
Of course the format can be different, but unfortunately there is no standard for exposing the list of available modules (different versions of Lmod show this information in different formats).
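To make the idea concrete, here is a minimal sketch of how the abstract requirements could be resolved against a per-HPC catalogue like the one above to produce the `module load` command (all names here, like `resolve_modules`, are hypothetical, not part of any existing code):

```python
# Minimal sketch: resolve abstract blueprint requirements against a
# per-HPC module catalogue. All names are hypothetical.

def resolve_modules(requirements, catalogue):
    """Return the modules to load, dependencies first.

    `requirements` maps package name -> version id (or None for default).
    `catalogue` is the per-HPC package list shown above.
    """
    modules = []
    packages = {entry["package"]: entry for entry in catalogue}
    for package, version in requirements.items():
        entry = packages[package]  # KeyError -> package not on this HPC
        version = version or entry["default"]
        # StopIteration -> requested version not on this HPC
        match = next(v for v in entry["versions"] if v["id"] == version)
        # Dependencies (e.g. a compiler) must be loaded first.
        modules.extend(match["dependencies"])
        modules.append(match["name"])
    return modules


hpc_catalogue = [
    {
        "package": "OpenMPI",
        "default": "1.10.2",
        "versions": [
            {"id": "1.10.2", "name": "openmpi/1.10.2",
             "dependencies": ["gcc/5.3.0"]},
        ],
    },
]

# Prints: module load gcc/5.3.0 openmpi/1.10.2
print("module load " + " ".join(
    resolve_modules({"OpenMPI": "1.10.2"}, hpc_catalogue)))
```

This way the blueprint only ever names abstract requirements, and the mapping to concrete module names lives entirely in each HPC's catalogue.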
What do you think?
Create a way in which the blueprints are generic and the actual modules are generated dynamically according to the infrastructure used. A list for each HPC, similar to i18n?
Thought: using Singularity, could it be that only the Singularity modules are necessary? Maybe it is just a matter of setting the "init modules" for each HPC (and, for other infrastructures, something other than modules if necessary, for example the VM image).
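If so, the per-infrastructure configuration could become much simpler; a sketch of what such settings might look like (all keys and values here are made up for illustration):

```python
# Hypothetical per-infrastructure settings: instead of a full module
# catalogue, each infrastructure only declares what to initialise.
infrastructures = {
    "ft2": {                       # an HPC: load Singularity up front;
        "type": "hpc",             # everything else ships in the container
        "init_modules": ["singularity/2.4.2"],
    },
    "cloud-example": {             # a cloud: the analogue of init modules
        "type": "cloud",           # is the VM image to boot
        "vm_image": "centos-7-singularity",
    },
}
```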