DifferentiableUniverseInitiative / IDRIS-hackathon

Repository for hosting material and discussions for the 2021 IDRIS GPU hackathon
MIT License

Benchmarking 3D FFTs with NVIDIA Nsight Systems #2

Closed EiffL closed 3 years ago

EiffL commented 3 years ago

We have a small benchmark script that should allow us to test the scaling of distributed 3D FFTs with Mesh TensorFlow. But so far, @kimchitsigai and I are running into issues when trying to get a profiler trace with nsys profile.

The script itself is located here, and we are trying to run it with the following SLURM job:

#!/bin/bash
#SBATCH --job-name=fft_benchmark     # job name
##SBATCH --partition=gpu_p2          # uncomment for the gpu_p2 partition
#SBATCH --ntasks=8                   # total number of MPI tasks (= total number of GPUs)
#SBATCH --ntasks-per-node=4          # number of MPI tasks per node (= number of GPUs per node)
#SBATCH --gres=gpu:4                 # number of GPUs per node (max 8 with gpu_p2)
#SBATCH --cpus-per-task=10           # number of CPU cores per task (a quarter of the node here)
##SBATCH --cpus-per-task=3           # number of CPU cores per task (for gpu_p2: 1/8 of the node)
# Note: in Slurm terminology, "multithread" refers to hyperthreading
#SBATCH --hint=nomultithread         # hyperthreading disabled
#SBATCH --time=00:10:00              # maximum requested run time (HH:MM:SS)
#SBATCH --output=fft_benchmark%j.out # name of the output file
#SBATCH --error=fft_benchmark%j.out  # name of the error file (here merged with the output)
#SBATCH -A ftb@gpu                   # specify the project
#SBATCH --qos=qos_gpu-dev            # using the dev queue, as this is only for profiling

# clean up modules loaded interactively and inherited by default
module purge

# load the required modules
module load tensorflow-gpu/py3/2.4.1+nccl-2.8.3-1

# echo the commands being run
set -x

# run the code under nsys: 1 GPU per MPI task
srun --unbuffered --mpi=pmi2 -o fft_%t.log nsys profile --stats=true -t nvtx,cuda,mpi -o result-%q{SLURM_TASK_PID} python -u fft_benchmark.py --mesh_shape="b1:2,b2:4" --layout="nx:b1,tny:b1,ny:b2,tnz:b2"

Unfortunately, this crashes for some reason before returning the full trace, and we are not sure why.
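
Before digging into the MPI side, a minimal single-rank sanity check along these lines (not something from the original job; it only reuses the module name from the script above, and the tiny TensorFlow workload is arbitrary) can confirm that nsys by itself produces a report on this system:

#!/bin/bash
# Hypothetical sanity check: profile a single rank running a trivial TensorFlow
# computation, to confirm that nsys can write a report at all before debugging
# the full 8-GPU MPI run.
module purge
module load tensorflow-gpu/py3/2.4.1+nccl-2.8.3-1

srun --ntasks=1 --gres=gpu:1 nsys profile -t cuda -o sanity_check \
    python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"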

kimchitsigai commented 3 years ago

I found two nsys binaries: one is /gpfslocalsup/pub/anaconda-py3/2020.02/envs/tensorflow-gpu-2.4.1+nccl-2.8.3-1/bin/nsys and the other is /gpfslocalsys/cuda/10.2/bin/nsys, with different sizes.
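
A quick way to see which of the two ends up on the PATH after loading the TensorFlow module (a sketch, not something run in the original thread; the module name is taken from the job script above):

# List every nsys visible on the PATH and its version, to see whether the
# conda-provided copy or the CUDA 10.2 copy would actually be used by srun.
module load tensorflow-gpu/py3/2.4.1+nccl-2.8.3-1
which -a nsys
nsys --version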

EiffL commented 3 years ago

aha! maybe that's it!

kimchitsigai commented 3 years ago

Or we may need CUDA 11:

[Screenshot of the TensorFlow tested GPU build configurations table, which lists CUDA 11.0 for tensorflow-2.4]

(https://www.tensorflow.org/install/source#gpu)
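
One way to check this (a hedged sketch, not from the original thread) is to compare the CUDA toolkit available on the node with the CUDA version the installed TensorFlow was built against; tf.sysconfig.get_build_info() should report the latter in TF 2.4:

# Compare the CUDA toolkit on the node with the CUDA version TensorFlow 2.4.1
# was compiled against.
nvcc --version | grep release
python -c "import tensorflow as tf; print(tf.sysconfig.get_build_info()['cuda_version'])"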

EiffL commented 3 years ago

This script seems to work:

#!/bin/bash
#SBATCH --job-name=fft_benchmark     # job name
##SBATCH --partition=gpu_p2          # uncomment for the gpu_p2 partition
#SBATCH --ntasks=8                   # total number of MPI tasks (= total number of GPUs)
#SBATCH --ntasks-per-node=4          # number of MPI tasks per node (= number of GPUs per node)
#SBATCH --gres=gpu:4                 # number of GPUs per node (max 8 with gpu_p2)
#SBATCH --cpus-per-task=10           # number of CPU cores per task (a quarter of the node here)
##SBATCH --cpus-per-task=3           # number of CPU cores per task (for gpu_p2: 1/8 of the node)
# Note: in Slurm terminology, "multithread" refers to hyperthreading
#SBATCH --hint=nomultithread         # hyperthreading disabled
#SBATCH --time=00:10:00              # maximum requested run time (HH:MM:SS)
#SBATCH --output=fft_benchmark%j.out # name of the output file
#SBATCH --error=fft_benchmark%j.out  # name of the error file (here merged with the output)
#SBATCH -A ftb@gpu                   # specify the project
#SBATCH --qos=qos_gpu-dev            # using the dev queue, as this is only for profiling

# clean up modules loaded interactively and inherited by default
module purge

# load the required modules
module load tensorflow-gpu/py3/2.4.1+nccl-2.8.3-1

# echo the commands being run
set -x

# JZ FIX: point temporary files to the per-job scratch space
export TMPDIR=$JOBSCRATCH
ln -s $JOBSCRATCH /tmp/nvidia

# run the code with binding via bind_gpu.sh: 1 GPU per MPI task
srun --unbuffered --mpi=pmi2 -o fft_%t.log /gpfslocalsup/pub/idrtools/bind_gpu.sh nsys profile --stats=true -t nvtx,cuda,mpi -o result-%q{SLURM_TASK_PID} python -u fft_benchmark.py --mesh_shape="b1:2,b2:4" --layout="nx:b1,tny:b1,ny:b2,tnz:b2"

EiffL commented 3 years ago

So.... I also found that in some configurations I'm getting a crash from nsys at this point:

[r10i3n3:06773] *** Process received signal ***
[r10i3n3:06773] Signal: Segmentation fault (11)
[r10i3n3:06773] Signal code: Address not mapped (1)
[r10i3n3:06773] Failing at address: (nil)
[r10i3n3:06773] [ 0] /lib64/libpthread.so.0(+0x12dc0)[0x1523e7f79dc0]
[r10i3n3:06773] [ 1] /lib64/libc.so.6(+0x3c04a)[0x1523e724d04a]
[r10i3n3:06773] [ 2] /gpfs7kro/gpfslocalsys/cuda/10.2/nsight-systems-2019.5.2/target-linux-x64/libToolsInjectionOpenMPI64.so(MPI_Init_thread+0x2db)[0x1523e845b3ab]
[r10i3n3:06773] [ 3] /gpfslocalsup/pub/anaconda-py3/2020.02/envs/tensorflow-gpu-2.4.1+nccl-2.8.3-1/lib/python3.7/site-packages/mpi4py/MPI.cpython-37m-x86_64-linux-gnu.so(+0x326e5)[0x1523677816e5]

which makes me think there might be an incompatibility at the MPI level with nsys.
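
One way to test that hypothesis (a sketch, not something reported in the thread) would be to drop the MPI tracer from the nsys trace list and see whether the segfault in libToolsInjectionOpenMPI64.so goes away:

# Same launch line as above, but tracing only NVTX and CUDA so that nsys does not
# inject its OpenMPI interception library; if the crash disappears, the MPI tracer
# is the likely culprit.
srun --unbuffered --mpi=pmi2 -o fft_%t.log nsys profile --stats=true -t nvtx,cuda \
    -o result-%q{SLURM_TASK_PID} python -u fft_benchmark.py \
    --mesh_shape="b1:2,b2:4" --layout="nx:b1,tny:b1,ny:b2,tnz:b2"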

EiffL commented 3 years ago

Ok, so I think we figured out the problem: we had several versions of nsys coexisting on the system. This appears to have been mostly resolved by loading the nvidia-nsight-systems/2021.1.1 module.
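
For reference, a quick way to confirm that this module is the one actually providing nsys (a sketch; the module name comes from the comment above):

# Load the dedicated Nsight Systems module and confirm which nsys binary and
# version end up being used, instead of the CUDA 10.2 copy.
module load nvidia-nsight-systems/2021.1.1
which nsys
nsys --version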

Another thing we identified today is that we needed these extra lines to make nsys work nicely:

export TMPDIR=$JOBSCRATCH
ln -s $JOBSCRATCH /tmp/nvidia
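
If the job can land on a node where that link already exists from a previous run, a slightly more defensive variant might look like this (a sketch, assuming $JOBSCRATCH is set by the scheduler as in the scripts above):

# Redirect temporary files to the per-job scratch space and replace any stale
# /tmp/nvidia link left behind by an earlier job on the same node.
export TMPDIR=$JOBSCRATCH
ln -sfn "$JOBSCRATCH" /tmp/nvidia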

I'm updating the demo scripts accordingly.

EiffL commented 3 years ago

I'm gonna close this because this is pretty much under control now.