Open antonelepfl opened 5 years ago
@jplanasc : I assume IOR with MPI I/O backend was used? @antonelepfl : could we meet next week to go through this?
cc: @iomaganaris
Sure, just let me know a day and we can discuss it. Thank you
If I remember correctly, MPI I/O backend was used for tests on BB5. I can't say for the others, @antonelepfl can probably answer.
I have tested all systems with the same package (https://github.com/hpc/ior.git)
You can probably check this here: /gpfs/bbp.cscs.ch/home/antonel/20181114/io_performance
But there is not such a big difference on either BB5 or OpenStack, so if you think this change to the libraries should have bumped the performance we can meet; otherwise I guess it's fine.
We are not considering Nuvla anymore for the time being, because the provider will change their architecture and this may change a lot.
This ticket is mainly for the Nuvla setup
Hi, I've been testing the performance, and even though it seems to write in parallel at the end, the reports show the CPUs don't run at 100% during the simulation, making the simulation slower overall than before.
Test with: Hippocampus Microcircuit (O1/20181114)
250 cells of central column
BB5
DOCKER (OpenStack VM)
DOCKER (Nuvla)
To test the raw read/write performance, with Judit's help, we used IOR
Where "Workers" means 4 processes, 2 on each worker VM, launched with MPI
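For reference, a run matching that setup would look roughly like the sketch below. The block/transfer sizes and the output path here are illustrative assumptions, not the settings from the original tests:

```shell
# Hypothetical IOR invocation (sizes and path are assumptions):
#   -np 4    : 4 MPI processes (2 per worker VM via the hostfile)
#   -a MPIIO : use the MPI-IO backend
#   -w -r    : run a write phase, then a read phase
#   -b 256m  : per-process block size
#   -t 4m    : transfer size per I/O call
mpirun -np 4 --hostfile hosts.txt \
    ior -a MPIIO -w -r -b 256m -t 4m -o /gpfs/scratch/ior_testfile
```

Needs an MPI environment plus the IOR binary from https://github.com/hpc/ior.git on the path, so it is shown here as a sketch rather than a turnkey command.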