Open stefstef00 opened 3 months ago
Created a method to evaluate benchmarks. The currently implemented features include:
```
<specified_path>/
├── environment.txt        Environment variables
├── statistics.txt         Statistics per benchmark
└── benchmarks/            Problem results per benchmark
    ├── <benchmark 1>.txt  Problem results benchmark 1
    ├── <benchmark 2>.txt  Problem results benchmark 2
    └── ...
```
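For illustration only, here is a minimal shell sketch of how a layout like the one above could be produced; the directory name `results` and the benchmark file name are hypothetical stand-ins, not part of this PR:

```shell
# Hypothetical sketch of the output layout described above.
# "results" stands in for <specified_path>; file names are placeholders.
outdir="results"
mkdir -p "$outdir/benchmarks"

# Record environment variables relevant to the run.
printf 'JULIA_NUM_THREADS=%s\n' "${JULIA_NUM_THREADS:-1}" > "$outdir/environment.txt"

# Per-benchmark statistics and per-problem results would be written here.
: > "$outdir/statistics.txt"
: > "$outdir/benchmarks/example_benchmark.txt"

ls -R "$outdir"
```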
Just a thought -- it might be useful to consider building some (or all?) of this functionality on top of an existing tool for running large-scale experiments, like DrWatson.jl.