-
I've written https://juliahpc.github.io/. Depending on how important you think this "HPC workflow" aspect is, we should probably either integrate some of the content into MJW or at least link to this …
-
Hi, I have recently moved to a new cluster. I am using ipyrad [v.0.9.95] on a cluster with 20 cores:
ipyrad [v.0.9.95]
Interactive assembly and analysis of RAD-seq data
----…
-
**What would you like to be added**:
I'd like MultiKueue to support plain Pods the same way it supports Job and JobSet.
**Why is this needed**:
In general multi-tenant clusters not dedicated to ML/HPC, we…
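For context, Kueue already admits Jobs that point at a LocalQueue via a label; the request is for a bare Pod to be handled the same way. A rough sketch of what that would look like (the queue name and pod spec here are invented for illustration, not taken from the issue):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    # Same label Kueue uses to queue Jobs; the ask is for plain Pods
    # to be admitted (and dispatched via MultiKueue) the same way.
    kueue.x-k8s.io/queue-name: user-queue
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "60"]
```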
-
Any ideas on how to manage differences between HPC systems in the lessons? Here's a start for things to consider:
* hostname of login machine
* queueing system (and queue names, walltime limits, etc.)
* use of…
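One way to manage those differences is to factor the site-specific values into a small lookup that lesson material can reference. A minimal sketch in Python; every hostname, queue name, and limit below is invented for illustration:

```python
# Hypothetical per-site settings for lesson material; all values are made up.
SITES = {
    "alpha": {
        "login_host": "login.alpha.example.edu",
        "scheduler": "slurm",
        "default_queue": "shared",
        "max_walltime_hours": 48,
    },
    "beta": {
        "login_host": "hpc.beta.example.org",
        "scheduler": "pbs",
        "default_queue": "workq",
        "max_walltime_hours": 24,
    },
}

def submit_hint(site: str) -> str:
    """Return a scheduler-appropriate submission command for a site."""
    cfg = SITES[site]
    if cfg["scheduler"] == "slurm":
        cmd, flag = "sbatch", "-p"   # Slurm selects a partition with -p
    else:
        cmd, flag = "qsub", "-q"     # PBS selects a queue with -q
    return f"ssh {cfg['login_host']} {cmd} {flag} {cfg['default_queue']} job.sh"
```

Lessons could then render the correct command per site instead of hard-coding one cluster's conventions.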
-
README contains the following line:
`python -c 'import psiflow; psiflow.setup_slurm()'`
but the module does not seem to have that function; however,
`psiflow.setup_slurm_config()`
exists and seems t…
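Assuming the reporter is right that `setup_slurm_config()` is the function the module actually exposes, the corrected README one-liner would presumably be:

```shell
# Hypothetical correction, based only on the reporter's observation above.
python -c 'import psiflow; psiflow.setup_slurm_config()'
```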
-
Is it possible to use a local HPC or GPU cluster? I understand that it works perfectly with AWS, but what about when the use of AWS is not possible and other resources are available? Can it be co…
-
Hi,
The uncertainpy calculation quickly gets out of hand as the number of uncertain parameters grows, mainly in terms of memory requirements for any personal machine. I could be wrong, but don't thi…
-
I am building the `Brunsli/0.1-GCCcore-13.2.0` module from source on our HPC clusters, and it depends on `libbrotlidec-static.a`. The latter static library is automatically created with Brotli version…
-
Hi all,
not an issue, but thought I would share a script I wrote that auto-creates the samples_metadata file from the files you have in your folder and also adds the reference genome file to the arg…
-
One data point is that locally, on OpenMPI 5, the test ran fine on one GPU.
There was a discussion elsewhere (maybe @maleadt remembers) about whether that flag is still needed, or which MPI versions can now hand…