bevyengine / bevy

A refreshingly simple data-driven game engine built in Rust
https://bevyengine.org
Apache License 2.0

Clean benchmarks storage type code duplication #5161

Open Vrixyz opened 2 years ago

Vrixyz commented 2 years ago

Objective

In the ECS benchmarks, code is always duplicated to account for the `Table` and `Sparse` storage types.

Duplication makes these benchmarks harder to maintain and keep consistent; a sketch of the pattern is shown below.
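To make the duplication concrete, here is a minimal sketch, assuming bevy_ecs's `#[component(storage = "SparseSet")]` attribute. The `Table` and `Sparse` component names follow the benchmark files, but the two counting functions are only illustrative, not the actual benchmark code:

```rs
use bevy_ecs::prelude::*;

// The two storage flavours every benchmark currently has to cover.
#[derive(Component, Default)]
struct Table(f32);

#[derive(Component, Default)]
#[component(storage = "SparseSet")]
struct Sparse(f32);

// Each benchmark body is then written twice, once per component type:
fn count_added_table(world: &mut World) -> usize {
    let mut query = world.query_filtered::<Entity, Added<Table>>();
    query.iter(world).count()
}

fn count_added_sparse(world: &mut World) -> usize {
    let mut query = world.query_filtered::<Entity, Added<Sparse>>();
    query.iter(world).count()
}
```

Every benchmark repeats a pair like this, differing only in the component type.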

Work In Progress

Possible solutions

Generic function to help avoid the duplicated code.

See implementation:

```rs
use bevy_ecs::prelude::*;
use criterion::{black_box, Criterion};

// Runs every benchmark closure in `benches` with the same parameters.
fn generic_bench<P: Copy>(
    bench_group: &mut BenchGroup,
    mut benches: Vec<Box<dyn FnMut(&mut BenchGroup, P)>>,
    bench_parameters: P,
) {
    for b in &mut benches {
        b(bench_group, bench_parameters);
    }
}

// Benchmark body, generic over the component (and therefore the storage type).
fn all_added_detection_generic<T: Component + Default>(group: &mut BenchGroup, entity_count: u32) {
    group.bench_function(
        format!("{}_entities_{}", entity_count, std::any::type_name::<T>()),
        |bencher| {
            bencher.iter_batched(
                || setup::<T>(entity_count),
                |mut world| {
                    let mut count = 0;
                    let mut query = world.query_filtered::<Entity, Added<T>>();
                    for entity in query.iter(&world) {
                        black_box(entity);
                        count += 1;
                    }
                    assert_eq!(entity_count, count);
                },
                criterion::BatchSize::LargeInput,
            );
        },
    );
}

// Harness: instantiates the generic benchmark once per storage type.
fn all_added_detection(criterion: &mut Criterion) {
    let mut group = criterion.benchmark_group("all_added_detection");
    group.warm_up_time(std::time::Duration::from_millis(500));
    group.measurement_time(std::time::Duration::from_secs(4));
    for entity_count in RANGE_ENTITIES_TO_BENCH_COUNT.map(|i| i * 10_000) {
        generic_bench(
            &mut group,
            vec![
                Box::new(all_added_detection_generic::<Table>),
                Box::new(all_added_detection_generic::<Sparse>),
            ],
            entity_count,
        );
    }
}
```
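For context, the snippet above also leans on a few items from the benchmark module that are not shown here. The following is only a rough sketch of what they could look like (together with the `Table`/`Sparse` components sketched under Objective), not the exact definitions:

```rs
use bevy_ecs::prelude::*;
use criterion::measurement::WallTime;

// Alias used by the benchmark functions.
type BenchGroup<'a> = criterion::BenchmarkGroup<'a, WallTime>;

// Illustrative range; the harness multiplies it by 10_000.
const RANGE_ENTITIES_TO_BENCH_COUNT: std::ops::Range<u32> = 1..5;

// Spawn `entity_count` entities that each carry a `T` component
// (`T` being `Table` or `Sparse`).
fn setup<T: Component + Default>(entity_count: u32) -> World {
    let mut world = World::default();
    world.spawn_batch((0..entity_count).map(|_| (T::default(),)));
    world
}
```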

Declarative macro

This was my first attempt, but macros are harder to read for most people and less standard, and this solution would need more work to be as flexible as the generic function.

See implementation:

```rs
use bevy_ecs::prelude::*;
use criterion::{black_box, Criterion};

// Generates a criterion harness `$harness` that runs `$bench` for both storage types.
#[macro_export]
macro_rules! bevy_bench {
    ( $( $bench:ident, $harness:ident ),+ ) => {
        $(
            fn $harness(criterion: &mut Criterion) {
                let mut group = criterion.benchmark_group(stringify!($bench));
                group.warm_up_time(std::time::Duration::from_millis(500));
                group.measurement_time(std::time::Duration::from_secs(4));
                for entity_count in RANGE_ENTITIES_TO_BENCH_COUNT.map(|i| i * 10_000) {
                    $bench::<Table>(&mut group, entity_count);
                    $bench::<Sparse>(&mut group, entity_count);
                }
                group.finish();
            }
        )+
    };
}

// Same generic benchmark body as in the previous solution.
fn all_added_detection_generic<T: Component + Default>(group: &mut BenchGroup, entity_count: u32) {
    group.bench_function(
        format!("{}_entities_{}", entity_count, std::any::type_name::<T>()),
        |bencher| {
            bencher.iter_batched(
                || setup::<T>(entity_count),
                |mut world| {
                    let mut count = 0;
                    let mut query = world.query_filtered::<Entity, Added<T>>();
                    for entity in query.iter(&world) {
                        black_box(entity);
                        count += 1;
                    }
                    assert_eq!(entity_count, count);
                },
                criterion::BatchSize::LargeInput,
            );
        },
    );
}

bevy_bench!(all_added_detection_generic, all_added_detection);
```

Procedural macro

Procedural macros are more complex, so I think one of the other solutions is better until proven otherwise.

Vrixyz commented 2 years ago

If we reduce the code duplication in these similar functions (sparse and table), we could merge the files together, which would help make the file organisation more readable. Their naming is also currently not coherent, which doesn't help with grouping them mentally.