yt-project / yt-4.0-paper

manubot-based repository for the yt 4.0 method paper

Scaling results #133

Open · matthewturk opened this issue 1 year ago

matthewturk commented 1 year ago

Need examples of scaling of operations.

matthewturk commented 1 year ago

@Xarthisius I think you and possibly @cindytsai have looked at this. Would you be able to take this on and look at producing some simple scaling plots, even if they don't demonstrate great performance?

Xarthisius commented 1 year ago

I found something I did on 03/08/2023:

import time

import yt
from yt.config import ytcfg

yt.enable_parallelism()

ds = yt.load("/dpool/kacperk/second_star/DD0182/DD0182")
ds.index  # build the index up front so it is not charged to the timed operations

# Time a parallel maximum search over the gas density field.
start_time = time.time()
v, c = ds.find_max(("gas", "density"))
if yt.is_root():
    print(v, c)
    max_time = time.time() - start_time
    print("--- (max) %.2f seconds ---" % max_time)

# Time a full-domain projection of gas density along the z-axis.
start_time = time.time()
p = yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(1.0, "unitary"))

if yt.is_root():
    prj_time = time.time() - start_time
    print("--- (prj) %.2f seconds ---" % prj_time)
    # Only the root rank appends to results.csv, so each mpiexec run adds one row.
    nprocs = int(ytcfg.get("yt", "internals", "global_parallel_size"))
    with open("results.csv", "a") as fp:
        fp.write("%i,%.2f,%.2f\n" % (nprocs, max_time, prj_time))
    p.save()

Run via:

#!/bin/bash
# Repeat each measurement 10 times for MPI process counts from 1 to 32.
for i in {1..10}; do
    for n in 1 2 4 8 16 32; do
        mpiexec -n $n -bind-to core python canary.py --parallel
    done
done

on an Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (20 cores, 40 threads with hyper-threading)

Results: results.csv
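
For reference, a minimal plotting sketch (not from the thread) that turns results.csv into the requested strong-scaling plot, assuming the three columns written by canary.py (nprocs, max_time, prj_time), one row per run, and a recent matplotlib:

import csv
from collections import defaultdict
from statistics import median

import matplotlib.pyplot as plt

# nprocs -> ([find_max times], [projection times]) over all repeats
times = defaultdict(lambda: ([], []))
with open("results.csv") as fp:
    for nprocs, max_time, prj_time in csv.reader(fp):
        times[int(nprocs)][0].append(float(max_time))
        times[int(nprocs)][1].append(float(prj_time))

counts = sorted(times)
max_med = [median(times[n][0]) for n in counts]
prj_med = [median(times[n][1]) for n in counts]

fig, ax = plt.subplots()
# Speedup relative to the smallest process count (1 in the loop above).
ax.plot(counts, [max_med[0] / t for t in max_med], marker="o", label="find_max")
ax.plot(counts, [prj_med[0] / t for t in prj_med], marker="s", label="ProjectionPlot")
ax.plot(counts, counts, "k--", label="ideal")
ax.set_xscale("log", base=2)
ax.set_yscale("log", base=2)
ax.set_xlabel("MPI processes")
ax.set_ylabel("speedup (median of repeats)")
ax.legend()
fig.savefig("scaling.png")

Using the median over the ten repeats damps run-to-run noise, and plotting speedup against the ideal line makes departures from linear scaling easy to read off.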

You explained to me on Slack why find_max was so slow and not scaling, but that conversation is lost beyond the 30-day history and I don't remember the reason.
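
Independent of whatever that explanation was, one possible refinement to the timing script above (not part of the original run) would be to barrier-synchronize the ranks around each timed region, so the recorded times reflect the slowest rank rather than whenever the root happens to finish. A minimal sketch, assuming mpi4py is available (yt.enable_parallelism() already depends on it):

import yt
from mpi4py import MPI

yt.enable_parallelism()
comm = MPI.COMM_WORLD

ds = yt.load("/dpool/kacperk/second_star/DD0182/DD0182")
ds.index  # build the index before timing

# Barrier before and after each operation so all ranks start and stop together.
comm.Barrier()
t0 = MPI.Wtime()
v, c = ds.find_max(("gas", "density"))
comm.Barrier()
max_time = MPI.Wtime() - t0

comm.Barrier()
t0 = MPI.Wtime()
p = yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(1.0, "unitary"))
comm.Barrier()
prj_time = MPI.Wtime() - t0

if yt.is_root():
    print("--- (max) %.2f s, (prj) %.2f s ---" % (max_time, prj_time))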