openbenchmarking.org does a good job of recording system specs as benchmark metadata. http://openbenchmarking.org/result/1405134-PL-AIOLAPTOP47
I'd like to do the same for our benchmarks. Either transliterate their detection code to Python (if it's simple enough) or vendor the PHP. https://gitorious.org/phoronix/phoronix-test-suite/source/master:pts-core/objects/phodevi/components
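To make the idea concrete, here is a minimal sketch of what recording system specs as benchmark metadata could look like in Python, using only the standard library. The function name and the exact set of fields are assumptions for illustration, not an existing API in cheetah or phodevi.

```python
# Hypothetical sketch: gather a small dict of hardware/OS facts to
# attach to benchmark results, roughly in the spirit of phodevi.
# The field selection here is an assumption, not phodevi's actual schema.
import json
import platform


def collect_system_specs():
    """Return a dict of system facts suitable for benchmark metadata."""
    return {
        "os": platform.system(),                    # e.g. "Linux"
        "os_release": platform.release(),
        "machine": platform.machine(),              # e.g. "x86_64"
        "processor": platform.processor(),
        "python_implementation": platform.python_implementation(),
        "python_version": platform.python_version(),
    }


if __name__ == "__main__":
    print(json.dumps(collect_system_specs(), indent=2, sort_keys=True))
```

A real port would need the deeper hardware probing phodevi does (CPU model, memory, disks), which is where vendoring the PHP might win out.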
Then we'd try to get benchmarks from a particular machine (type) into separate folders. They'd be named either by a hash of the system data, or some more human-readable subset of the system data. I'd really like to get to a place where we can detect regressions on Travis, at least when we hit previously-seen hardware.
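The folder-naming step could be sketched like this, again with the standard library only. Both the function name and the 12-character hash truncation are assumptions made for illustration; either naming scheme gives a stable identifier for "same machine type" as long as the collected specs are stable.

```python
# Hypothetical sketch: derive a stable folder name for a machine type,
# either as a hash of the system data or a human-readable subset of it.
import hashlib
import json


def machine_folder_name(specs, human_readable=True):
    """Map a dict of system specs to a deterministic folder name."""
    if human_readable:
        # Join values in sorted-key order, e.g. "x86_64-Linux".
        return "-".join(str(specs[key]) for key in sorted(specs))
    # Hash a canonical JSON encoding so key order can't change the name.
    blob = json.dumps(specs, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]  # truncation is arbitrary
```

Hashing the canonical JSON (sorted keys) matters: without it, two runs that enumerate the same specs in different order would land in different folders.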
The long-long-term goal would be some kind of benchmarking-helper library that we could use both in cheetah and other performance-sensitive projects.