-
```
/opt/nginx-1.2.2/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:
In function ‘ngx_http_upstream_init_jvm_route_rr’:
/opt/nginx-1.2.2/nginx_upstream_jvm_route/ngx_http_upstream_jvm_…
-
On our cluster, users have started using gprMax to run simulations.
Other tools using the GPUs run fine; these nodes contain multiple GPUs.
When running gprMax with GPU support (PyCUDA) we'v…
-
Slurm has numerous optional arguments for both sbatch and srun. Rather than trying to enumerate all of them with their own keys in the batch/step blocks, I'd like to propose adding a way to pass thro…
-
Thank you for taking the time to submit an issue!
## Background information
When we run a program under Open MPI on a TCP network, it always reports a "connection reset by peer" error, as below:
…
-
In the case of a machine with two CPU sockets (NUMA nodes) and three GPUs, such that two of the GPUs are connected to one socket and the third GPU to the other, when I allocate three …
-
Hi!
I am trying out this tool now. I installed it via conda and am using it on a Slurm-run cluster with srun. I have asked for 10 CPUs. I have a file list containing 8 plasmid FASTA seqs. The plasmids …
-
I was trying to get onto the second layer of an HPC system after SSH-ing, using a Slurm command (srun), as generally described in #1722. I tried the newest `RemoteCommand` from the pre-release version …
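For reference, the usual shape of an ssh_config entry that runs srun on connect looks like the following; the host alias, hostname, and srun line here are illustrative assumptions, not taken from the issue:

```
Host hpc-compute
    HostName login.cluster.example
    RequestTTY yes
    RemoteCommand srun --pty /bin/bash
```

`RemoteCommand` requires OpenSSH 7.6 or newer on the client, and `RequestTTY yes` is needed for an interactive shell to work through it.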
-
A multi-task mpi-hello-world program run under a single-node Flux instance, with the Open MPI 3.0.1 installed in /usr/tce, hangs until the PSM2 initialization times out.
Backtrace from pid 17986{2,4}:…
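If PSM2 is being selected on a node without Omni-Path hardware, one commonly suggested workaround is to steer Open MPI away from it via MCA parameters. A sketch of the environment settings (a diagnostic workaround, not a confirmed fix for this hang):

```
# Force the ob1 PML over TCP/shared-memory transports instead of PSM2
export OMPI_MCA_pml=ob1
export OMPI_MCA_btl=tcp,self,vader
```

If the hang disappears with these set, that points at PSM2 transport selection rather than the Flux instance itself.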
-
For the exact same setup, a Pop III simulation on Setonix GPUs shows a very different evolution compared to the CPU run. The initial density projections for both are identical:
![plt00000_Projection_z_ga…