BlueBrain / neuron-docker-spack

Spack Based Docker Image for NEURON Simulations
Apache License 2.0

Low performance using HDF5 MPI parallel dependency #3

Open antonelepfl opened 5 years ago

antonelepfl commented 5 years ago

This ticket is mainly for the Nuvla setup

Hi, I've been testing the performance. Even though the reports seem to be written in parallel at the end, when I select reports the CPUs don't run at 100% during the simulation, making the simulation overall slower than before.

Test with: Hippocampus Microcircuit (O1/20181114)

250 cells of the central column

| Nodes | Numprocs | Cells | ms |
|-------|----------|-------|-----|
| 1     | 8        | 250   | 300 |

BB5

| Module         | No Reports (mm:ss) | With Report (mm:ss) | Neuron Version     | Module name            |
|----------------|--------------------|---------------------|--------------------|------------------------|
| Old (serial)   | 06:57              | 07:44               | 7.6.2-2 2018-08-24 | neurodamus/hippocampus |
| New (parallel) | 08:00              | 07:15               | 7.6.2-2 2018-08-24 | neurodamus-hippocampus |

DOCKER (OpenStack VM)

| Module         | No Reports (mm:ss) | With Report (mm:ss) | Neuron Version      | Module name            |
|----------------|--------------------|---------------------|---------------------|------------------------|
| Old (serial)   | 19:09              | 19:30               | 7.6.2-3 2018-08-28  | neurodamus/hippocampus |
| New (parallel) | 20:37              | 20:32               | 7.6.2-28 2018-10-18 | neurodamus-hippocampus |

DOCKER (Nuvla)

| Module         | No Reports (mm:ss) | With Report (mm:ss) | Neuron Version      | Module name            |
|----------------|--------------------|---------------------|---------------------|------------------------|
| Old (serial)   | 12:15              | 22:33               | 7.6.2-3 2018-08-28  | neurodamus/hippocampus |
| New (parallel) | 13:52              | 32:45               | 7.6.2-28 2018-10-18 | neurodamus-hippocampus |

To test the raw read/write performance, with the help of Judit, we used IOR:

| Machine       | File         | Read | Write | NFS | Notes           |
|---------------|--------------|------|-------|-----|-----------------|
| OpenStack     | testFile.ior | 2100 | 400   | NO  | 8 CPU / 4G RAM  |
| Nuvla Worker  | testFile.ior | 170  | 4     | YES |                 |
| Nuvla WorkerS | testFile.ior | 440  | 2.5   | YES |                 |

Here WorkerS means 4 MPI processes, 2 on each worker VM.
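For reference, an IOR run with the MPI-IO backend typically looks like the following; the process count, block size, and transfer size here are illustrative assumptions, not necessarily the values used in the tests above:

```shell
# Illustrative IOR invocation with the MPI-IO backend (sizes are assumptions):
#   -a MPIIO  select the MPI-IO API
#   -b 16m    block size written per task
#   -t 1m     transfer size per I/O call
#   -w -r     run both the write and the read phase
#   -o ...    target test file
mpirun -np 8 ior -a MPIIO -b 16m -t 1m -w -r -o testFile.ior
```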

pramodk commented 5 years ago

@jplanasc : I assume IOR with MPI I/O backend was used? @antonelepfl : could we meet next week to go through this?

cc: @iomaganaris

antonelepfl commented 5 years ago

Sure just let me know a day and we can discuss it. Thank you

jplanasc commented 5 years ago

If I remember correctly, MPI I/O backend was used for tests on BB5. I can't say for the others, @antonelepfl can probably answer.

antonelepfl commented 5 years ago

I have tested all systems with the same package (https://github.com/hpc/ior.git), as you can check here: /gpfs/bbp.cscs.ch/home/antonel/20181114/io_performance

But there isn't such a big difference in either BB5 or OpenStack, so if you think this library change should have bumped the performance we can meet; otherwise I guess it's fine.

We are not considering Nuvla any more for the time being, since the provider will change their architecture and that may change things a lot.