Closed. richardmward closed this issue 1 month ago.

Is there a way to have a custom emitter so that I could write the scores out to a different format, rather than having them printed to the console?
Could you elaborate on what output format you would need? Ideally, the function benchmark should take an instance of Emitter so that the output becomes customizable; the output could then be written or piped to a file. Usage could look something like this:
import 'package:benchmark_runner/benchmark_runner.dart';

class CustomEmitter extends ColorPrintEmitter {
  void emitMean({required Score score}) {
    print('Mean Standard deviation');
    print('${score.stats.mean} ${score.stats.stdDev}');
  }
}

void main(List<String> args) {
  group('List:', () {
    final originalList = <int>[for (var i = 0; i < 1000; ++i) i];

    benchmark(
      'construct | Custom emitter',
      () {
        var list = <int>[for (var i = 0; i < 1000; ++i) i];
      },
      emitter: CustomEmitter(),
      report: (instance, emitter) => emitter.emitMean(
        score: instance.score(),
      ),
    );

    benchmark('construct', () {
      var list = <int>[for (var i = 0; i < 1000; ++i) i];
    }, report: reportLegacyStyle);
  });
}
This would be a breaking change, since the class Benchmark and the function benchmark would become something like:
import 'dart:async';

import 'package:ansi_modifier/ansi_modifier.dart';
import 'package:benchmark_harness/benchmark_harness.dart'
    show BenchmarkBase;

import '../extensions/benchmark_helper.dart';
import '../extensions/color_profile.dart';
import '../extensions/duration_formatter.dart';
import '../extensions/string_utils.dart';
import '../utils/stats.dart';
import 'color_print_emitter.dart';
import 'group.dart';
import 'score.dart';

/// A synchronous function that does nothing.
void doNothing() {}

/// Generates a report that includes benchmark score statistics.
void reportStats(Benchmark instance, ColorPrintEmitter emitter) {
  emitter.emitStats(
    description: instance.description,
    score: instance.score(),
  );
}

/// Generates a BenchmarkHarness style report. Score times refer to
/// a single execution of the `run()` function.
void reportLegacyStyle(Benchmark instance, ColorPrintEmitter emitter) {
  instance.report();
}

/// Generic function that reports benchmark scores by calling an emitter [E].
typedef Reporter<E extends ColorPrintEmitter> = void Function(Benchmark, E);
/// A class used to benchmark synchronous functions.
/// The benchmarked function is provided as a constructor argument.
class Benchmark extends BenchmarkBase {
  /// Constructs a [Benchmark] object using the following arguments:
  /// * [description]: a [String] describing the benchmark,
  /// * [run]: the synchronous function to be benchmarked,
  /// * [setup]: a function that is executed once before running the benchmark,
  /// * [teardown]: a function that is executed once after the benchmark has
  ///   completed.
  const Benchmark({
    required String description,
    required void Function() run,
    void Function() setup = doNothing,
    void Function() teardown = doNothing,
    ColorPrintEmitter emitter = const ColorPrintEmitter(),
  })  : _run = run,
        _setup = setup,
        _teardown = teardown,
        super(description, emitter: emitter);

  final void Function() _run;
  final void Function() _setup;
  final void Function() _teardown;

  /// The benchmarked code.
  @override
  void run() => _run();

  /// Not measured setup code executed prior to the benchmark runs.
  @override
  void setup() => _setup();

  /// Not measured teardown code executed after the benchmark runs.
  @override
  void teardown() => _teardown();

  /// To opt into reporting the time per run() call instead of per 10 run() calls.
  @override
  void exercise() => _run();

  /// Returns the benchmark description (corresponds to the getter name).
  String get description => name;
  ({List<double> scores, int innerIter}) sample() {
    _setup();
    final warmupRuns = 3;
    final sample = <int>[];
    final innerIters = <int>[];
    final overhead = <int>[];
    final watch = Stopwatch();
    var innerIterMean = 1;
    try {
      // Warmup (default: for 200 ms with 3 pre-runs).
      final scoreEstimate = watch.warmup(_run);
      final sampleSize = BenchmarkHelper.sampleSize(
        scoreEstimate.ticks,
      );
      if (sampleSize.inner > 1) {
        final durationAsTicks = sampleSize.inner * scoreEstimate.ticks;
        for (var i = 0; i < sampleSize.outer + warmupRuns; i++) {
          // Averaging each score over at least 25 runs.
          // For details see function BenchmarkHelper.sampleSize.
          final score = watch.measure(
            _run,
            durationAsTicks,
          );
          sample.add(score.ticks);
          innerIters.add(score.iter);
        }
        innerIterMean = innerIters.reduce((sum, element) => sum + element) ~/
            innerIters.length;
      } else {
        for (var i = 0; i < sampleSize.outer + warmupRuns; i++) {
          watch.reset();
          _run();
          // These scores are not averaged.
          sample.add(watch.elapsedTicks);
          watch.reset();
          overhead.add(watch.elapsedTicks);
        }
        for (var i = 0; i < sampleSize.outer; i++) {
          // Removing overhead of calling elapsedTicks and adding a list element.
          // Overhead scores are of the order of 0.1 us.
          sample[i] = sample[i] - overhead[i];
        }
      }
      // Rescale to microseconds.
      // Note: frequency is expressed in Hz (ticks/second).
      return (
        scores: sample
            .map<double>(
              (e) => e * (1000000 / watch.frequency),
            )
            .skip(warmupRuns)
            .toList(),
        innerIter: innerIterMean,
      );
    } finally {
      teardown();
    }
  }
  /// Returns a [Score] object holding the total benchmark duration
  /// and a [Stats] object created from the score samples.
  Score score() {
    final watch = Stopwatch()..start();
    final sample = this.sample();
    return Score(
      runtime: watch.elapsed,
      sample: sample.scores,
      innerIter: sample.innerIter,
    );
  }

  /// Runs the method [measure] and emits the benchmark score.
  @override
  void report() {
    final watch = Stopwatch()..start();
    final score = measure();
    watch.stop();
    final runtime = watch.elapsed.msus.style(ColorProfile.dim);
    emitter.emit('$runtime $description', score);
    print(' ');
  }
}
/// Defines a benchmark for the synchronous function [run]. The benchmark
/// scores are emitted to stdout.
/// * `run`: the benchmarked function,
/// * `setup`: executed once before the benchmark,
/// * `teardown`: executed once after the benchmark runs,
/// * `emitter`: an optional custom emitter of type [E],
/// * `report`: the callback used to emit the benchmark score; it is invoked
///   with the benchmark instance and `emitter` when a custom emitter is
///   provided.
void benchmark<E extends ColorPrintEmitter>(
  String description,
  void Function() run, {
  void Function() setup = doNothing,
  void Function() teardown = doNothing,
  E? emitter,
  Reporter<E> report = reportStats,
}) {
  final group = Zone.current[#group] as Group?;
  var groupDescription =
      group == null ? '' : '${group.description.addSeparator(':')} ';
  final instance = Benchmark(
    description: groupDescription +
        description.style(
          ColorProfile.benchmark,
        ),
    run: run,
    setup: setup,
    teardown: teardown,
    emitter: emitter ?? ColorPrintEmitter(),
  );
  final watch = Stopwatch()..start();
  try {
    if (run is Future<void> Function()) {
      throw UnsupportedError('The callback "run" must not be marked async!');
    }
  } catch (error, stack) {
    reportError(
      error,
      stack,
      description: instance.description,
      runtime: watch.elapsed,
      errorMark: benchmarkError,
    );
    return;
  }

  runZonedGuarded(
    () {
      try {
        if (emitter == null) {
          reportStats(instance, instance.emitter as ColorPrintEmitter);
        } else {
          report(instance, emitter);
        }
        addSuccessMark();
      } catch (error, stack) {
        reportError(
          error,
          stack,
          description: instance.description,
          runtime: watch.elapsed,
          errorMark: benchmarkError,
        );
      }
    },
    (error, stack) {
      // Safeguard: errors should be caught in the try block above.
      reportError(
        error,
        stack,
        description: instance.description,
        runtime: watch.elapsed,
        errorMark: benchmarkError,
      );
    },
  );
}
My intention is to write the stats to a CSV (or JSON) file so that I can store them and process them later (as charts, etc.). I am happy with the information the print emitter provides; I just don't want to have to parse that output if there is a way to get the raw data in the first place.
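For illustration, a custom emitter along the lines of the CustomEmitter shown above could append the raw statistics to a CSV file directly. The sketch below only assumes the Score fields used in that example (score.stats.mean, score.stats.stdDev); the class name CsvEmitter, the method emitCsv, and the file name are made up for this example:

import 'dart:io';

import 'package:benchmark_runner/benchmark_runner.dart';

/// Sketch of an emitter that appends raw score statistics to a CSV file.
/// (CsvEmitter, emitCsv, and the file name are hypothetical.)
class CsvEmitter extends ColorPrintEmitter {
  final File file = File('benchmark_scores.csv');

  void emitCsv({required String description, required Score score}) {
    // Write a header row on first use, then one row per benchmark.
    if (!file.existsSync()) {
      file.writeAsStringSync('description,mean,stdDev\n');
    }
    file.writeAsStringSync(
      '$description,${score.stats.mean},${score.stats.stdDev}\n',
      mode: FileMode.append,
    );
  }
}

It would be hooked in exactly like CustomEmitter above, e.g. emitter: CsvEmitter() together with report: (instance, emitter) => emitter.emitCsv(description: instance.description, score: instance.score()).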
I published the changes to the branch custom-emitter. To try this version out, you could add this dependency:

dev_dependencies:
  # benchmark_runner: ^0.1.4
  benchmark_runner:
    git:
      url: https://github.com/simphotonics/benchmark_runner.git
      ref: custom-emitter # branch name
Thanks for that! That approach works just fine for my purposes, except that I'd need it on the async version.
Thanks for the feedback. Just to let you know, I've adapted the async version as well. Since v0.2.0 will introduce a breaking change anyway, I also rewrote the executable benchmark_runner. It now has the sub-commands report:

$ dart run benchmark_runner report <searchDirectory|file>

and export:

$ dart run benchmark_runner export --extension=csv --outputDir=out <searchDirectory|file>

After adding some tests I plan to merge the branches and publish v0.2.0 to pub.dev.
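For orientation only: if the async API mirrors the synchronous one shown above (an asyncGroup/asyncBenchmark pair accepting the same emitter and report arguments, with a Future-returning score()), usage with a custom emitter might look roughly like this sketch; none of these async signatures are confirmed in this thread:

import 'package:benchmark_runner/benchmark_runner.dart';

void main(List<String> args) {
  asyncGroup('List:', () {
    asyncBenchmark(
      'construct | Custom emitter',
      () async {
        var list = <int>[for (var i = 0; i < 1000; ++i) i];
      },
      emitter: CustomEmitter(),
      // Assumption: the async reporter receives an async benchmark instance
      // whose score() returns a Future<Score>.
      report: (instance, emitter) async => emitter.emitMean(
        score: await instance.score(),
      ),
    );
  });
}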
Published version 1.0.0. Closing this issue for now. If something is not working as expected, please file a new request.