In ePIC we will have to decide where in the algorithm graph to add transfers to different facilities.
Right now, in simulations, we run everything on a single site, so this isn't necessary yet. However, it would be useful to be able to evaluate not just the number of algorithm calls or the time spent in each algorithm, but also the average data flow rates (e.g. kB/ev) for each link.
This would help us decide which algorithms to keep running on echelon 1 (fast algorithms with high input data flow rate, e.g. hit reconstruction through clustering), and which to ship offsite and run on echelon 2 (slow algorithms with low input data flow rate, e.g. full event reconstruction, vertexing).
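As a rough sketch of what such an evaluation could look like: the idea is to accumulate bytes transferred per link over many events, report the average kB/ev per link, and flag low-rate links as candidate cut points for an echelon-1 to echelon-2 transfer. All names (`FlowMonitor`, the toy links, the threshold) are illustrative assumptions, not an existing API.

```python
# Hypothetical sketch: estimating average data flow rates (kB/ev) per link
# in an algorithm graph. All names and values are illustrative.
from collections import defaultdict

class FlowMonitor:
    """Accumulate bytes transferred per (producer, consumer) link."""
    def __init__(self):
        self.bytes_per_link = defaultdict(int)
        self.n_events = 0

    def record_event(self, transfers):
        """transfers: dict mapping (producer, consumer) -> bytes this event."""
        self.n_events += 1
        for link, nbytes in transfers.items():
            self.bytes_per_link[link] += nbytes

    def rates_kb_per_event(self):
        """Average kB/ev for each link."""
        return {link: total / self.n_events / 1024.0
                for link, total in self.bytes_per_link.items()}

    def cut_candidates(self, max_kb_per_event):
        """Links whose average flow is low enough to consider moving the
        downstream algorithm offsite (echelon 2)."""
        return [link for link, rate in self.rates_kb_per_event().items()
                if rate <= max_kb_per_event]

# Example: two events through a toy two-link graph
mon = FlowMonitor()
mon.record_event({("hits", "clustering"): 400_000,
                  ("clustering", "tracking"): 20_000})
mon.record_event({("hits", "clustering"): 360_000,
                  ("clustering", "tracking"): 24_000})
# hits->clustering averages ~371 kB/ev; clustering->tracking ~21 kB/ev,
# so only the latter falls under a (made-up) 100 kB/ev cut threshold.
print(mon.cut_candidates(max_kb_per_event=100.0))
```

In a real service this accounting would hook into the serialization layer of each algorithm output rather than taking per-event dicts, but the aggregation and the cut decision would look the same.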