benchflow / data-transformers

Spark scripts utilised to transform data to the BenchFlow internal formats

Small Code Changes to Apply or Document #93

Open VincenzoFerme opened 8 years ago

VincenzoFerme commented 8 years ago

Resulting data in the tables (refer to the BenchFlow.wfmsTest.1.1 run):

<responseTimes unit="microseconds">
            <operation name="isOutYet">
                <avg>589.741</avg>
                <max>3526.234</max>
                <sd>164.058</sd>
                <percentile nth="25" suffix="th">461168601842738.750</percentile>
                <percentile nth="50" suffix="th">461168601842738.750</percentile>
                <percentile nth="75" suffix="th">461168601842738.750</percentile>
                <percentile nth="90" suffix="th" limit="9223372036854776000.000">461168601842738.750</percentile>
                <percentile nth="95" suffix="th">461168601842738.750</percentile>
                <percentile nth="99.9" suffix="th">461168601842738.750</percentile>
                <passed>true</passed>
            </operation>
</responseTimes>
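A minimal sketch (not the transformer's actual code) of how such a <responseTimes> fragment can be read with Python's standard library; it also makes the problem visible, since every percentile reports the same implausible value:

import xml.etree.ElementTree as ET

# Shortened copy of the fragment above, pasted verbatim for illustration.
fragment = """
<responseTimes unit="microseconds">
    <operation name="isOutYet">
        <avg>589.741</avg>
        <percentile nth="90" suffix="th">461168601842738.750</percentile>
        <percentile nth="95" suffix="th">461168601842738.750</percentile>
    </operation>
</responseTimes>
"""

root = ET.fromstring(fragment)
for op in root.findall("operation"):
    # Each <percentile> is distinguished only by its nth attribute, so the
    # attribute has to travel with the value wherever it is stored.
    percentiles = {p.get("nth"): float(p.text) for p in op.findall("percentile")}
    print(op.get("name"), float(op.findtext("avg")), percentiles)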

Testing:

Cerfoglg commented 8 years ago

@VincenzoFerme

  1. I don't know why network_interface_data contains only 0s: it takes what Docker provides, and falls back to what nethogs reports when Docker gives nothing.
  2. exp_byte_size contains the metrics computed over the database sizes of the trials of that experiment.
  3. Currently we don't compute metrics over the network data.
  4. Status is also written in the summary file; if it's null, it's because it was not reported for some reason.
  5. I had no idea that was even a possibility, and frankly there is no way to take every possibility into account when they are this unpredictable. Either these structures are well defined or they are not; otherwise this becomes difficult to manage.
  6. Those tables ended up empty because the properties step failed, so the ram and cpu scripts could not find the properties data (CPU core counts, max memory) they needed (see the sketch after this list).
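As a hedged sketch of the failure mode in point 6 (the key names cpu_cores and max_memory are illustrative, not the actual property names), a guard like the following would let the cpu/ram scripts skip their computation cleanly instead of failing when the properties data is unavailable:

def required_properties(properties, keys=("cpu_cores", "max_memory")):
    """Return the properties the cpu/ram computations need, or None if any
    of them is missing, so the caller can skip the step instead of failing."""
    if not properties:
        return None
    values = {key: properties.get(key) for key in keys}
    if any(value is None for value in values.values()):
        return None
    return values

# With the properties data missing, the guard returns None and the dependent
# computation can be skipped explicitly.
print(required_properties({}))                                    # -> None
print(required_properties({"cpu_cores": 4, "max_memory": 8589934592}))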
VincenzoFerme commented 8 years ago

@Cerfoglg

  1. Investigate. I verified the raw data, and they seem correct.
  2. Ok, I forgot :)
  3. Ok, opened https://github.com/benchflow/analysers/issues/95
  4. Where is this described in the Faban documentation linked in https://github.com/benchflow/data-transformers/issues/52? I can't find it.
  5. Just check for the right structure of the percentiles: if the <percentile> element has the nth attribute, you also need to take it into account when generating the name of the metric under which the value is stored in the Cassandra table; otherwise you overwrite it (see the sketch after this list).
  6. Yes, fine. This is a note for me; I'm going to check them after a new run.
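A sketch of the naming rule suggested in point 5 (hypothetical helper; the actual metric/column names used in the Cassandra table may differ): the nth attribute is folded into the metric name so each percentile maps to a distinct key.

def percentile_metric_name(base, nth, suffix=""):
    """Fold the nth attribute into the metric name so that the 25th, 50th,
    75th, 90th, 95th and 99.9th percentiles map to distinct keys."""
    return "{}_percentile_{}{}".format(base, str(nth).replace(".", "_"), suffix)

# Without the nth attribute in the name, every value would be written under
# the same key and the last one parsed would overwrite the others.
print(percentile_metric_name("response_time", "95", "th"))    # response_time_percentile_95th
print(percentile_metric_name("response_time", "99.9", "th"))  # response_time_percentile_99_9th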
Cerfoglg commented 8 years ago

@VincenzoFerme I believe the last issue listed was cleared