abeimler / ecs_benchmark

Benchmarks of common ECS (Entity-Component-System)-Frameworks in C++ (or C)
MIT License

The log scaling of the graphs is distorting differences between libraries #20

Closed: clevijoki closed this issue 8 months ago

clevijoki commented 8 months ago

After removing the std::fmt locally, I can see that one lib takes 120ms to update and another takes 11ms in the 1-2M category. But because the log scaling tries to put them all on the same graph, the library that is 10x slower is displayed as only about 15% slower.

Perhaps the speeds can be normalized by dividing by the entity count instead.
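Something like this, for example (a rough sketch; I'm assuming the report data has 'Framework', 'Entities', and 'Time (us)' columns, and the numbers are made up):

    import pandas as pd

    # Made-up numbers, just to illustrate the normalization.
    df = pd.DataFrame({
        'Framework': ['A', 'A', 'B', 'B'],
        'Entities': [1_000_000, 2_000_000, 1_000_000, 2_000_000],
        'Time (us)': [11_000, 22_000, 120_000, 245_000],
    })
    # Divide by entity count so results at different scales are comparable.
    df['Time per entity (us)'] = df['Time (us)'] / df['Entities']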

It's also not clear what a 1-2M category is, since the benchmarks just measure 1M and 2M separately.

abeimler commented 8 months ago

I agree the graph can be a bit misleading. I'm using Plotly histograms and pandas `cut` to group some values.

From gen_benchmark_report:

    # Define custom groups
    custom_groups = {
        0: '0-128',
        128: '128-1024',
        1024: '1024-8192',
        8192: '8192-16384',
        16384: '16384-65536',
        65536: '65536-131072',
        131072: '131072-524288',
        1048576: '1M-2M',
    }
    # Create a new column 'EntityGroup' based on the custom groups
    results['_plot_data_histogram'][ek]['data']['EntityGroup'] = pd.cut(
        results['_plot_data_histogram'][ek]['data']['Entities'],
        bins=list(custom_groups.keys()) + [float('inf')],
        labels=list(custom_groups.values()),
    )
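For reference, a self-contained snippet (with made-up entity counts) showing what that pd.cut call produces:

    import pandas as pd

    custom_groups = {
        0: '0-128', 128: '128-1024', 1024: '1024-8192',
        8192: '8192-16384', 16384: '16384-65536',
        65536: '65536-131072', 131072: '131072-524288',
        1048576: '1M-2M',
    }
    entities = pd.Series([64, 1000, 200_000, 2_000_000])  # made-up counts
    groups = pd.cut(entities,
                    bins=list(custom_groups.keys()) + [float('inf')],
                    labels=list(custom_groups.values()))
    print(groups.tolist())  # ['0-128', '128-1024', '131072-524288', '1M-2M']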

I can make more groups, but then the graphs get bigger.
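A hypothetical way to get more groups without hand-writing the dict would be to generate power-of-two edges (just a sketch, not what the script currently does):

    import pandas as pd

    # Power-of-two bin edges from 128 up to 2M (hypothetical grouping).
    edges = [2 ** i for i in range(7, 22)]
    labels = [f'{lo}-{hi}' for lo, hi in zip(edges[:-1], edges[1:])]
    # Note: counts below 128 fall outside these edges and become NaN.
    groups = pd.cut(pd.Series([1000, 2_000_000]), bins=edges, labels=labels)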

Available Measurements and Grouping for X Entities: "0-128", "128-1024", "1024-8192", "8192-16384", "16384-65536", "65536-131072", "131072-524288", "1M-2M"

Maybe I should just cherry-pick the values without much grouping. The tables below also show only the picked values. ... I can't remember why I grouped them :smile:

I'm very open to suggestions on how to use pandas with Python and Plotly the right way.

Thanks for reviewing the graphs and results.

clevijoki commented 8 months ago

Here is the data plotted with lines as update cost per entity, to show the absolute differences:

[figure: SystemsUpdateMixedEntities (per-entity update cost, linear y axis)]

Here is the same thing, but with log y scaling:

[figure: SystemsUpdateMixedEntities_log_y (per-entity update cost, log y axis)]

I think both are useful.
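For reference, a minimal sketch of how these two views can be generated (assuming a per-entity column like the one sketched earlier; this is not the repo's actual plotting code, and the numbers are made up):

    import pandas as pd
    import plotly.express as px

    # Made-up per-entity numbers, same assumed columns as the earlier sketch.
    df = pd.DataFrame({
        'Framework': ['A', 'A', 'A', 'B', 'B', 'B'],
        'Entities': [100_000, 1_000_000, 2_000_000] * 2,
        'Time per entity (us)': [0.010, 0.011, 0.012, 0.095, 0.110, 0.125],
    })

    # Linear y axis: absolute per-entity differences stand out.
    px.line(df, x='Entities', y='Time per entity (us)', color='Framework').show()

    # Log y axis: relative trends stay readable across the whole range.
    px.line(df, x='Entities', y='Time per entity (us)', color='Framework',
            log_y=True).show()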

It was a royal pain to get this to run, btw. I am not familiar with Plotly, but I had to pin very specific versions of its dependencies, so the setup seems a bit fragile.

abeimler commented 8 months ago

Yeah, I had some trouble too; I switched from pip to pipx and now run the script with `pipx run --spec ./scripts/gen-benchmark-report gen-benchmark-report -c ./plot.config.json --reports-dir=./reports/ gen-plots ./reports/entityx.json ./reports/entt.json ...` (this script is very old; maybe I need to update some of its dependencies ^^).

I used line graphs back then (https://github.com/abeimler/ecs_benchmark/tree/5.0.0), but switched to histograms. Yours look better, but for me personally they're still hard to read.

abeimler commented 8 months ago

WIP:

I'm trying to group the histogram better (along with the fix for https://github.com/abeimler/ecs_benchmark/issues/19):

[image: WIP grouped histogram]

    results['_plot_data_histogram'][ek]['x'] = 'Entities'
    results['_plot_data_histogram'][ek]['y'] = 'Time (us)'
    results['_plot_data_histogram'][ek]['color'] = 'Framework'
    results['_plot_data_histogram'][ek]['barmode'] = 'group'
    results['_plot_data_histogram'][ek]['labels'] = {'Time (us)': 'Time (us)', 'Entities': 'Entities'}

    # Define custom groups
    custom_groups = {
        8: '[0, 64]',
        64: '[64, 256]',
        256: '[256, 1024]',
        1024: '[1024, 8192]',
        8192: '[8192, 16384]',
        16384: '[16k, 65k]',
        65536: '[65k, 131k]',
        131072: '[131k, 524k]',
        1048576: '1M',
        2097152: '2M',
    }
    # Create a new column 'EntityGroup' based on the custom groups
    results['_plot_data_histogram'][ek]['data']['EntityGroup'] = pd.cut(
        results['_plot_data_histogram'][ek]['data']['Entities'],
        bins=list(custom_groups.keys()) + [float('inf')],
        labels=list(custom_groups.values()),
    )
    results['_plot_data_histogram'][ek]['data_frame'] = pd.DataFrame(
        results['_plot_data_histogram'][ek]['data']
    )
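Presumably these settings then feed into Plotly Express roughly like this (a hedged sketch continuing the snippet above; the actual wiring in gen-benchmark-report isn't shown in this issue):

    import plotly.express as px

    # 'results' and 'ek' come from the snippet above.
    plot = results['_plot_data_histogram'][ek]
    fig = px.histogram(
        plot['data_frame'],
        x='EntityGroup',   # the grouped column created above
        y=plot['y'],
        color=plot['color'],
        barmode=plot['barmode'],
        labels=plot['labels'],
        histfunc='avg',    # assumption: average time per group
    )
    fig.show()
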
clevijoki commented 8 months ago

I would still plot the cost per entity and not the total cost. You can't really compare the cost of updating 1000 entities with the cost of updating 1M entities otherwise, because they measure two different things. Plotting them together causes additional distortion along the Y axis, because you're trying to put measurements of different things in the same image. This is also why grouping them only tells you less.

In your recent image, is it clear that for the 1M bucket the red bar is 6.6x slower than the blue bar? It's misleading to make it look like they all operate in roughly the same range.

This is the bar graph with cost normalized per entity:

[figure: SystemsUpdateMixedEntities (per-entity bar chart)]

I think the line graph is more useful, as it shows trends: for example, it makes it more obvious that there is some mystery performance degradation around the 300k entity count, present in all libs, that probably has something to do with cache sizes.

clevijoki commented 8 months ago

[figure: SystemsUpdateMixedEntities (per-entity line chart, linear y axis)]

Here's the same data with no log Y scaling, showing the absolute perf differences.

clevijoki commented 8 months ago

There's also no rule that says you can only have one graph. Presenting the same data in multiple ways (log Y vs. absolute, bar vs. line, total vs. per entity, etc.) reveals different things.