Regarding the action:
See the feedback from Deltares below:
Hi Bjorn,
Yes, this is fine by me. The kernel release is in ~2 weeks. Anna knows where to find the image 😊 We will be happy to support them in getting things up and running because it gets us useful feedback.
Anna van Gils I assume you are the technical point of contact for them? Feel free to get in touch with Adri when you have technical questions setting it up. If you/they get blocked, just set up a call with us (please include me as optional. If my agenda is limiting ignore it please).
As a system requirement: can you ask them to ensure Intel MPI is installed on their cluster? We are using version 2021.2.0.216 ourselves, but a version close to that should be fine.
Akhil Piplani Project Manager | Deltares
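For reference, a quick way to check whether Intel MPI is available on the cluster and which version is installed (a sketch only, assuming an environment-modules setup; the module name below is hypothetical and varies per site):

```bash
# List any Intel MPI modules provided by the cluster (module names vary per site)
module avail 2>&1 | grep -i -E "intel.*mpi|impi"

# Load one and confirm the version (Deltares uses 2021.2.0.216; a close version should be fine)
module load intel-mpi   # hypothetical module name
mpirun --version
```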
@avgils we can go ahead and share the Singularity image with GRNET
cc @yan0s @nikosT @sebastian-luna-valero @lorincmeszaros
The NFS server is set up with 300 TB as a starting volume.
You can mount it with `mount 192.168.0.85:/nfs-volume <path_to_mount>`.
Regarding the action
I was advised that we (the users, i.e. @avgils) can use the PaaS Orchestrator (https://indigo-paas.cloud.ba.infn.it/home/login) to easily deploy a Kubernetes cluster on GRNET. However, before we can use the PaaS Orchestrator, the HiSea VO needs to be configured for it, which is an action for INFN (i.e. @gdonvito and his team).
@gdonvito could you perhaps progress on configuring the HiSea VO for the PaaS Orchestrator? Note that the VO name will change to hisea.c-scale.eu
cc @avgils @lorincmeszaros @sebastian-luna-valero @enolfc @sustr4 @kkoumantaros
Delft3D FM is now running in Singularity on the GRNET HPC. Further actions are:
See also #14: @lorincmeszaros to compare container performance between different setups (e.g. using the MPI library inside the Singularity container vs the MPI library installed on the HPC).
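For the comparison in #14, the two launch modes would roughly look as follows (a minimal sketch; the image name `delft3dfm.sif`, the executable path and the `.mdu` file are placeholders, and the exact bind mounts depend on the GRNET setup):

```bash
# Hybrid mode: the host's (Intel) MPI launches the ranks, each rank runs inside the container.
# Host MPI and the MPI inside the image must be ABI-compatible for this to work.
mpirun -np 16 singularity exec delft3dfm.sif /opt/delft3dfm/bin/run_dflowfm.sh model.mdu

# Container-only mode: the MPI library inside the image does the launching
# (typically limited to the cores available on a single node).
singularity exec delft3dfm.sif mpirun -np 16 /opt/delft3dfm/bin/run_dflowfm.sh model.mdu
```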
Similarly, you can mount it automatically on boot by adding the following line at the end of /etc/fstab:
`192.168.0.85:/nfs-volume <path_to_mount> nfs4 defaults 0 0`
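A minimal end-to-end sketch of mounting the share on a fresh client (assuming a Debian/Ubuntu-style node and `/mnt/nfs-volume` as the mount point; both are placeholders):

```bash
# Install the NFS client tools if they are not already present
sudo apt-get install -y nfs-common

# Create the mount point and mount the share
sudo mkdir -p /mnt/nfs-volume
sudo mount -t nfs4 192.168.0.85:/nfs-volume /mnt/nfs-volume

# Verify that the volume is mounted and check available space
df -h /mnt/nfs-volume
```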
10-14 Jan 2022
Thanks for the great summary.
Please add me to the conversations about the INDIGO PaaS Orchestrator.
Sprint 3: 22-26 November 2021
Sprint activities
[x] Deploy the Delft3D FM Singularity container on GRNET's HPC (@avgils @lorincmeszaros)
[ ] Test the performance of the Singularity container using the MPI library inside the container vs the MPI library installed on the HPC (@avgils @lorincmeszaros). See also #14
[ ] Progress towards setting up a Kubernetes cluster to configure and automate the workflow (@sebastian-luna-valero @kkoumantaros)
[x] Progress on setting up the NFS server (network storage) and adjusting the data download scripts to download to the NFS server (@yan0s @nikosT @avgils @lorincmeszaros)
[ ] Find a secure place to store the `.cdsapirc` file for the C-SCALE account on the Climate Data Store (@sebastian-luna-valero). See also https://github.com/c-scale-community/use-case-hisea/issues/13#issuecomment-954703322 (a sketch of the file format follows this list).
For additional information see the notes from the Sprint 2 retro: https://github.com/c-scale-community/use-case-hisea/issues/13#issuecomment-954780477
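For reference, the `.cdsapirc` file that needs a secure home follows the standard two-line format documented by the Climate Data Store (the UID and API key below are placeholders):

```bash
# Create ~/.cdsapirc with restrictive permissions (UID and key are placeholders)
cat > ~/.cdsapirc <<'EOF'
url: https://cds.climate.copernicus.eu/api/v2
key: <UID>:<API-KEY>
EOF
chmod 600 ~/.cdsapirc
```

Once the Kubernetes cluster is in place, one option could be to store the credentials as a Kubernetes Secret and mount them into the download containers rather than keeping the file on a shared filesystem.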
Background, high-level objectives and status
We want to compare performance and scalability of 4 different architecture options:
Fully cloud based + boundary data accessed through the provider's datastore (easy access to data; avoids downloading data). Status: the scripts to download the requisite data have been prepared and containerised. High-level discussions are ongoing at GRNET regarding the provisioning of an NFS server.
Pre- and post-processing in the cloud + model running on HPC. Status: we have access to GRNET's HPC environment and in Sprint 3 will deploy the Singularity container. We will need to progress on the managed Kubernetes cluster setup and the Argo Workflows installation to achieve this test (a minimal install sketch follows this list).
Fully HPC based + boundary data downloaded / accessed. Status: we have access to GRNET's HPC environment and in Sprint 3 will deploy the Singularity container. We will need to adjust the download, pre- and post-processing scripts to run operationally / automatically in the HPC environment.
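For the Argo Workflows installation mentioned above, the upstream quick-start procedure is roughly the following (a sketch only; the release version is a placeholder and should be taken from the Argo Workflows releases page):

```bash
# Create a namespace and apply the official install manifest
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.2.4/install.yaml

# Check that the workflow controller and argo server pods come up
kubectl get pods -n argo
```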
AOB