-
I tried to run the following script, but I am getting errors (see below):
#!/bin/sh
#SBATCH --job-name=Test_MQ
#SBATCH --output=Slurm_Scripts.out
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32
#SBATCH …
-
Dear developers,
My CASSCF calculation crashes with a "Max size reached in AugHess" error when I use more than one MPI process. However, using the same input, the CASSCF calculation completes smoothly …
-
**Description**
The split of the first argument (X in the following example) gets changed from None to 0 during matrix multiplication. Example:
**To Reproduce**
Steps to reproduce the behavior:…
-
Add MPI or coarray Fortran parallelization. This is a fairly large issue, but it is tractable.
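Whichever route is taken, the core of the change is the same: a block decomposition of the work across ranks (or images). A minimal, binding-independent sketch of that decomposition in Python; the helper name `block_range` is hypothetical, not part of the project:

```python
def block_range(n, rank, size):
    """Return the half-open [start, stop) range of n items owned by `rank`
    out of `size` ranks, spreading the remainder over the first ranks."""
    base, rem = divmod(n, size)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop

# Every item is owned by exactly one rank, with near-equal counts:
n, size = 10, 4
ranges = [block_range(n, r, size) for r in range(size)]
# ranges == [(0, 3), (3, 6), (6, 8), (8, 10)]
```

The same arithmetic translates directly to MPI (each rank computes its own slice) or to coarray Fortran (each image does).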
-
Hello Mikael,
I have figured out how to go between fftn/ifftn and rfftn/irfftn and I think this is working nicely.
I am now trying to figure out how to use cosine and sine transforms. My firs…
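For reference, the fftn/rfftn correspondence mentioned above can be checked directly in plain NumPy: for real input, rfftn keeps only the non-negative frequencies along the last axis, so it equals the matching slice of fftn. This is a single-process sketch, not the distributed version:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6))          # real-valued input

full = np.fft.fftn(x)                    # complex transform, shape (4, 6)
half = np.fft.rfftn(x)                   # real transform, shape (4, 6//2 + 1)

# rfftn equals fftn restricted to the first n//2 + 1 bins of the last axis
assert np.allclose(half, full[:, : x.shape[-1] // 2 + 1])

# Round trip: irfftn needs the original last-axis length to invert uniquely
assert np.allclose(np.fft.irfftn(half, s=x.shape), x)
```

Passing `s=x.shape` to irfftn matters when the last axis has odd length, since the reduced spectrum alone does not determine it.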
-
**Submitting author:** @ShubhadeepSadhukhan1993 (Shubhadeep Sadhukhan)
**Repository:** https://github.com/ShubhadeepSadhukhan1993/fastSF
**Version:** v1.0.0
**Editor:** @jedbrown
**Reviewers:** @c…
-
I have been using NRHybSur3dq8 with gwsurrogate version 0.9.9 for several months on a Python 3.7 conda environment, and that environment is unable to install pymultinest because of conflicts with lots…
-
### Preamble
I am moving the discussion about SIMD that started in #716 here and adding hybrid parallelization.
The two topics go hand in hand since both (SPMD and SIMD) consist of processing mult…
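To make the parallel concrete, here is the same elementwise kernel written once as a scalar loop (the pattern each SPMD rank or SIMD lane executes) and once in whole-array form, which NumPy dispatches to vectorized loops internally. The kernel itself is just an illustrative stand-in:

```python
import numpy as np

def kernel_scalar(x):
    # One element at a time: what a single rank or lane does.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = 2.0 * x[i] + 1.0
    return out

def kernel_simd(x):
    # Whole-array form: the same operation over all elements at once.
    return 2.0 * x + 1.0

x = np.linspace(0.0, 1.0, 8)
assert np.allclose(kernel_scalar(x), kernel_simd(x))
```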
-
I am using mpirun to run my Python program across multiple nodes in a cluster. Each instance of the program uses MPI to determine its own rank and the number of processes, but nothing else. Each progr…
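When a program only needs its rank and the world size, one lightweight alternative to initializing MPI at all is to read the launcher's environment variables. The names below cover Open MPI, MPICH/PMI, and SLURM; which ones are actually set depends on the launcher, so treat this as an assumption to verify on your cluster:

```python
import os

def rank_and_size():
    """Best-effort (rank, size) from common launcher environment variables.
    Falls back to (0, 1) when run outside mpirun/srun."""
    rank_vars = ("OMPI_COMM_WORLD_RANK", "PMI_RANK", "SLURM_PROCID")
    size_vars = ("OMPI_COMM_WORLD_SIZE", "PMI_SIZE", "SLURM_NTASKS")
    rank = next((int(os.environ[v]) for v in rank_vars if v in os.environ), 0)
    size = next((int(os.environ[v]) for v in size_vars if v in os.environ), 1)
    return rank, size

rank, size = rank_and_size()
print(f"rank {rank} of {size}")
```

This avoids pulling in an MPI binding just to label processes, and the fallback keeps the script runnable as a plain single process.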