davidtaylorhq opened 3 months ago
I took a stab at reverting some of the extra assertions & iterator changes introduced in 64eb186f (see https://github.com/davidtaylorhq/glimmer-vm/pull/1). It does improve things by a couple of percentage points, but it's nothing compared to the 30-40% regression shown above 😢
It looks like a further rendering-speed regression has been released as part of Ember 5.10 😭
(https://emberperf.discourse.org)
Edit: it looks like glimmer-vm was not bumped between Ember 5.9 and Ember 5.10, though, so I guess this must be caused by a change in Ember itself.
For reference, the most recent upgrade PR (though this shipped in Ember 5.9): we went from 0.87.1 to 0.92.0.
Likely suspects:
Also, a few deprecations were added (to the AST utils). I wonder how much that code being present (extra branches, etc.) contributes to slowing down, especially since Ember has a bunch of extra transformations it uses.
@davidtaylorhq are those benches all with classic builds? I'm curious how using Vite could affect the resulting score. As I look through the Ember changelog, the only changes that aren't adding deprecations or deleting old code are ones supporting Vite's strictness.
@NullVoxPopuli I opened an ember.js issue at https://github.com/emberjs/ember.js/issues/20719 with more details on the most recent regression. It looks like the culprit is https://github.com/emberjs/ember.js/commit/53b2de8e3869da8b9ed66dae981496b44b81f057.
emberperf uses classic builds, and is pretty dependent on AMD resolution. So I think it'll need some pretty significant refactoring to work under Vite/Embroider.
> Also, a few deprecations were added (to the AST utils). I wonder how much that code being present (extra branches, etc) contributes to slowing down - especially since ember has a bunch of extra transformations it uses
It doesn't, unless Discourse / the test suite used here loads the template compiler at runtime, which is atypical.
Yeah, both Discourse and the emberperf test suite compile templates in advance :+1:
emberperf does load the template compiler into the browser, although at first glance it does it on a completely separate pageload from the one that measures rendering.
I bring it up because there's definitely atypical stuff in there, but so far nothing I can see that would skew the results.
This issue might be related: such a massive bump in bundle size could lead to a big slowdown in performance.
Mixed news so far.

The `hz` numbers are way off from the original benchmark, so I'm not entirely sure I am measuring the beginning and end of a test run correctly. I'm marking the end of a test run via `schedule('afterRender', ...)`, which might be sufficient? I did try requestAnimationFrame and requestIdleCallback, but those two timings are way slower than our event loop can run (requestIdleCallback was about 1/2 my monitor's refresh rate, and requestAnimationFrame, as you might expect, slowed the hz to my monitor's refresh rate).

I did do bad science here, as I changed how the measuring is done for the rendering benches. I'm using tinybench for all benches now, which is great, but it seems like things are running faster than expected. If anyone wants to take a poke at the benchmarking code for rendering, that's here, and feedback would be most welcome.

I've published the two apps here:
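As an aside, the refresh-rate cap mentioned above falls out of how hz is derived. Here is a minimal plain-JS sketch (no Ember, no tinybench; all names are illustrative) of computing hz from a timed batch of runs; an end-of-run callback scheduled via requestAnimationFrame can fire at most once per display frame, which is why it pins hz to the monitor's refresh rate:

```javascript
// Illustrative sketch: derive hz (runs per second) by timing a batch of
// synchronous test runs. Names here are made up for the example.
function measureHz(runFn, iterations) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    runFn(); // one synchronous "test run"
  }
  const elapsedSeconds = (performance.now() - start) / 1000;
  return iterations / elapsedSeconds;
}

// Example with a trivial workload:
const hz = measureHz(() => Math.sqrt(12345), 100_000);
console.log(`~${Math.round(hz)} ops/sec`);
```

If each run instead ended with an awaited requestAnimationFrame callback, the loop body could complete at most once per frame, so `hz` would max out at roughly the display refresh rate regardless of how fast the rendering itself is.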
Second update, after fixing some things (now using `renderSettled` to determine when rendering is done, rather than the runloop's `schedule('afterRender')`):

The `hz` / operations-per-second values make much more sense now. We have a similar degradation in both `development` and `production` modes:
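The promise-based approach can be sketched roughly like this (everything here is a stand-in: `settled` represents Ember's `renderSettled()` promise, whose exact import path depends on the Ember version, and the trigger function is a placeholder, not the benchmark's actual code):

```javascript
// Hedged sketch: time from triggering a render until an async "settled"
// promise resolves. `settled` stands in for Ember's renderSettled();
// both arguments are placeholders for this example.
async function timeUntilSettled(triggerRender, settled) {
  const start = performance.now();
  triggerRender(); // kick off the render under test
  await settled(); // resolves once rendering has flushed
  return performance.now() - start;
}

// Example with stand-ins: a no-op render and a timer-based settle signal.
timeUntilSettled(
  () => {},
  () => new Promise((resolve) => setTimeout(resolve, 10))
).then((ms) => console.log(`settled after ${ms.toFixed(1)}ms`));
```

The design difference from `schedule('afterRender')` is that a promise resolves exactly when the renderer reports it is done, rather than at a fixed slot in the runloop, so the measurement window tracks actual render completion.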
Just added canary / 5.12 (these are production results)
Pixel 6a
Chrome
Firefox
Did a memory allocation timeline and the graph looked like this:
which aligns with the work from @bendemboski in https://github.com/glimmerjs/glimmer-vm/pull/1440 which was released in 5.7: https://github.com/emberjs/ember.js/releases/tag/v5.7.0
Nice work, @bendemboski !
So far:
I uploaded a performance profile captured with Firefox; y'all can inspect and poke about here:
Of note, these are the top timings:
With no other changes, if I just use glimmer's prod assets as dev, we get a nice speed boost in both the production and development environments in the Ember apps.
I've added 5.11 and 6.0-alpha.1, and did a 6x CPU slowdown to try to account for random machine variance: https://ember-performance-testing-prod.pages.dev/report?benchmarks=%5B%22Render%20complex%20html%20(%40glimmer%2Fcomponent)%22%5D&clear=0&emberVersions=%5B%223.28%22%2C%224.0%22%2C%225.4%22%2C%225.5%22%2C%225.6%22%2C%225.7%22%2C%225.8%22%2C%225.9%22%2C%225.10%22%2C%225.11%22%2C%22ember-canary%22%2C%22ember-canary-custom%22%5D&timePerTest=3000
As you can see, there is still some variance, as there isn't really a lot that changed
https://github.com/emberjs/ember.js/compare/3dfb8a4...85a4f298f67ff70395cf7f9103682335162e0606
(but some logic around EXTEND_PROTOTYPES.Array did change).
3dfb8a4 is the actual v6 alpha.1 sha
I added another set of apps for comparing classic production builds. https://ember-performance-testing-prod-classic.pages.dev/report?benchmarks=%5B%22Render%20complex%20html%20(%40glimmer%2Fcomponent)%22%5D&clear=0&emberVersions=%5B%223.28%22%2C%224.0%22%2C%225.4%22%2C%225.5%22%2C%225.6%22%2C%225.7%22%2C%225.8%22%2C%225.9%22%2C%225.10%22%2C%225.11%22%2C%22ember-canary%22%5D&timePerTest=500
On my personal laptop, comparing with embroider:
embroider:
classic:
Note: it seems it's hard to control noise on my laptop.
Broccoli
Embroider (w/ a 20x (I think) CPU slowdown, because I have a lot of machine "noise")
From this PR: https://github.com/glimmerjs/glimmer-vm/pull/1606
In Discourse, and in Emberperf, we saw a fairly significant rendering-performance hit as part of the Ember 5.5 -> 5.6 bump:
Ember 5.6 included a bump of glimmer-vm from 0.84.3 to 0.85.1 (https://github.com/emberjs/ember.js/pull/20561)
Unfortunately, 0.84.3 -> 0.85.1 included a lot of structural changes in glimmer-vm, much of which was done without glimmer-vm's own performance testing in working order.
I was able to boot the glimmer-vm benchmark app on a handful of old commits[^1], and run tachometer on them to compare the `render` `performance.measure` metric.

[^1]: with 56ddfa cherry-picked on top to make the benchmark app work
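For anyone reproducing this, the `render` metric is a standard User Timing measure; a minimal sketch of recording and reading one (only the measure name `render` comes from the thread; the render work itself is a placeholder):

```javascript
// Sketch of recording a User Timing "render" measure -- the kind of
// metric tachometer can compare across commits. doRender() is a
// placeholder for the benchmark app's actual render work.
function doRender() {
  for (let i = 0; i < 1e6; i++) Math.sqrt(i);
}

performance.mark('render-start');
doRender();
performance.mark('render-end');
performance.measure('render', 'render-start', 'render-end');

const [measure] = performance.getEntriesByName('render', 'measure');
console.log(`render took ${measure.duration.toFixed(2)}ms`);
```

Tachometer can then be pointed at the named measure, so the same recording works unchanged across all the commits being compared.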
These numbers are clearly going in the wrong direction. Although it is also worth mentioning: the benchmark app itself underwent a bunch of refactoring across these commits... so it might not be a perfect comparison.
I would love to be able to bisect into specific commits to identify what caused the regressions. Unfortunately, on all the intermediate commits I've tried, I've been unable to get the benchmark app to boot because of various import/dependency/package-json errors. It seems the `perf.yml` GitHub CI job was disabled for much of this time, so I assume this was a known problem on these commits, and not a problem with my local setup.

So... I don't really know where that leaves us. Does anyone have any pointers for what else we can do to isolate the source of the regression(s)?