Open VincenzoFerme opened 8 years ago
@VincenzoFerme What's described here is all implemented, except for https://github.com/benchflow/analysers/issues/83
@Cerfoglg document here the final set of metrics. For example, the integral and the efficiency are missing, as well as the following ones:
Start from your thesis.
@ivanchikj how and why did we define the aggregate metrics at experiment level for the efficiency?
For the CPU efficiency at experiment level we have defined the aggregate metric as follows (T1, T2 and T3 are the trials):

experiment_efficiency = contribution_T1 + contribution_T2 + contribution_T3

where, for each trial Tx:

contribution_Tx = usage_efficiency_Tx * weight_Tx
usage_efficiency_Tx = cpu_integral_Tx / (cpu_max_Tx * number_data_points_Tx)
weight_Tx = number_data_points_Tx / (number_data_points_T1 + number_data_points_T2 + number_data_points_T3)
We apply this weighted average for both CPU and RAM.
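The aggregation described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Spark implementation; the trial fields (`integral`, `max`, `n_points`) and function names are assumptions chosen to mirror the formulas.

```python
# Sketch of the experiment-level efficiency aggregation: a weighted average
# of per-trial efficiencies, weighted by each trial's share of data points.

def usage_efficiency(integral, maximum, n_points):
    """Per-trial efficiency: the integral of usage normalised by the
    maximum possible usage over the same number of data points."""
    return integral / (maximum * n_points)

def experiment_efficiency(trials):
    """Weighted average of per-trial efficiencies; each trial is weighted
    by number_data_points_Tx / sum of number_data_points over all trials."""
    total_points = sum(t["n_points"] for t in trials)
    return sum(
        usage_efficiency(t["integral"], t["max"], t["n_points"])
        * t["n_points"] / total_points
        for t in trials
    )

# Three hypothetical trials of the same experiment (illustrative numbers).
trials = [
    {"integral": 450.0, "max": 1.0, "n_points": 600},
    {"integral": 460.0, "max": 1.0, "n_points": 610},
    {"integral": 440.0, "max": 1.0, "n_points": 590},
]
print(experiment_efficiency(trials))  # → 0.75
```

The same function works unchanged for RAM by feeding it the RAM integral and maximum instead of the CPU ones.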
In the following I describe the experiment-level metrics and statistics we should implement as Spark scripts. They are open for discussion and extension in this thread.
Some background on the type of data we have
We perform different trials for the same experiment, making sure that the environment in which we execute the experiment is stable across the trials and that the initial conditions are always the same. This means the behaviour is pretty stable across the different runs, and hence the performance measures are pretty similar.
Metrics and Statistics
ToDos