ariadne-cps / ariadne

C++ framework for rigorous computation on cyber-physical systems
http://www.ariadne-cps.org
GNU General Public License v3.0

Profiling framework #284

Open lgeretti opened 5 years ago

lgeretti commented 5 years ago

I think it would be good to set up a minimal framework for profiling purposes.

It should have the following:

  1. Ability to define a score metric, which is used along with the execution time metric to evaluate performance
  2. Persistence on text files, kept under version control in order to identify the performance at a given commit (I'd avoid producing multiple files for each run; it should be enough to compare with the latest committed version). It would be good, though, to be able to create separate files based on metadata about the platform used for benchmarking (OS, compiler, etc.), to support comparisons across different configurations/users.
  3. After running, print a summary of changes with respect to the versioned results in the text file.

A possibility for handling a large library of results without polluting the repository too much would be to use a separate repository imported as a Git submodule. It would still be necessary to create a commit though.

lgeretti commented 3 years ago

Depends on #425

lgeretti commented 3 years ago

1) Rely on YAML files to persist results, stored in a separate private Git repository in our ariadne-cps GitHub space. We use libyaml-cpp and libgit2 for read/write access to the remote repository, and also to identify the current version of a profiling routine.
2) The machine running the profiling routine is uniquely identified by collecting some system information, so that comparisons are made only against the appropriate results.
3) The routine itself is also uniquely identified, using its file path in the index and the latest Git commit that changed it.
4) Each routine can return multiple measures; in that case each measure is supplied with an identifier. The number of measures is allowed to change when the routine changes.
5) Each file is associated with a routine+machine pair and is appended to with each run of the routine on that machine, for each measure.
6) Each run entry comprises (a) the commit hash of the whole repository, followed by summary information on execution time. Given a number of tries as an input argument to the routine, we persist (b) the tries, (c) the average execution time, (d) its minimum, (e) its maximum and (f) the standard deviation.
7) Comparisons can be made between a new run and the history of runs; if multiple measures are provided, comparisons are made per measure identifier.
8) We can evaluate a suite of routines with respect to a new commit, with summary information on the increases and decreases for each, not unlike a test suite.