alice-mkh opened this issue 3 years ago
Not fixed in 0.1.4.
The canvas is now switching to a GtkPicture and displaying a paintable while doing the zoom gesture. It's fast enough for me with integrated graphics (X1 Yoga). Can you switch to the resvg renderer in the developer settings (hidden inside the app menu; enable developer mode to access it) and tell me if it is still too slow? This one uses TextureNodes, which should be much faster when zooming but has other disadvantages. Maybe I should convert it to a texture regardless of the renderer while doing the gesture.
Both seem slow, tbh. I do see some pixelation, but it's definitely doing extra work beyond drawing a texture - check Obfuscate, which does exactly that. The app currently limits zoom to (0.1, 2.0), but if you remove the check in the code, it manages 60 fps at any zoom level just fine here (Intel + 4K screen).
1002b43422d8837b9abe779becb71022a51393a9 and f9e23c71849cec445967cf3a5919aa382de7f908 should improve gesture and scroll zooming performance significantly; I'm getting 80-100 fps. Let me know if it works for you on 4K too, if you get around to testing it or with the next release :)
v0.2.1 implements asynchronous, threaded rendering, so I believe this should be solved. Can you confirm, @Exalm?
It's better than before, but it doesn't feel async here. The app visibly stutters when it's redrawing the canvas.
Alright, why do I make my life so difficult? Now that everything is a TextureNode, all that is needed is to "overlay" a temporary zoom factor in the snapshot func while zooming, like you mentioned is done in Obfuscate. Patch coming soon.
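As a rough sketch of what that could look like (hypothetical names like `temp_zoom` and the tile list; not the actual rnote code, just the general gtk4-rs pattern): while the gesture is active, the cached textures are appended under a temporary scale transform instead of re-rendering any strokes.

```rust
use gtk::{gdk, graphene};

// Minimal sketch, assuming a custom canvas widget that keeps already-rendered
// stroke/background textures around. `temp_zoom` and the tile list are
// hypothetical; only gtk::Snapshot calls from gtk4-rs are used.
fn snapshot_canvas(
    snapshot: &gtk::Snapshot,
    temp_zoom: f32,
    tiles: &[(gdk::Texture, graphene::Rect)],
) {
    snapshot.save();
    // Apply the temporary zoom of the ongoing gesture as a scene-graph
    // transform, so the GPU just scales the existing textures.
    snapshot.scale(temp_zoom, temp_zoom);

    for (texture, bounds) in tiles {
        // Re-appending cached textures is cheap; nothing is re-rendered here.
        snapshot.append_texture(texture, bounds);
    }
    snapshot.restore();
    // A full re-render at the final zoom only happens once the gesture ends.
}
```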
Any news on this?
If that still won't bring it under the 1 ms margin, then let me suggest giving Blend2D a try, as it's the fastest vector backend I've ever seen.
That's an interesting video. Currently, 1 ms is not really possible, as not even the polling rate of drawing tablets allows for that. But there is still some room for improvement, especially in how lines are currently generated / interpolated. Right now, Catmull-Rom splines are generated with the individual input positions as control points. That means they generate a smooth-looking line, but a stroke element will always lag behind by at least one data point compared to straight lines. So there is room for improvement in the algorithms.
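For reference, this is where that one-point lag comes from: a uniform Catmull-Rom segment between `p1` and `p2` already needs the following input point `p3` as a control point, so the rendered stroke can only ever reach the second-newest input position. A minimal sketch of the textbook formula (not rnote's actual code):

```rust
/// Uniform Catmull-Rom interpolation between `p1` and `p2` at parameter `t` in [0, 1].
/// Note that `p3`, the input point *after* the segment, is required as a control
/// point, which is why the drawn stroke trails the latest input by one point.
fn catmull_rom(p0: [f64; 2], p1: [f64; 2], p2: [f64; 2], p3: [f64; 2], t: f64) -> [f64; 2] {
    let t2 = t * t;
    let t3 = t2 * t;
    let mut out = [0.0; 2];
    for i in 0..2 {
        out[i] = 0.5
            * (2.0 * p1[i]
                + (-p0[i] + p2[i]) * t
                + (2.0 * p0[i] - 5.0 * p1[i] + 4.0 * p2[i] - p3[i]) * t2
                + (-p0[i] + 3.0 * p1[i] - 3.0 * p2[i] + p3[i]) * t3);
    }
    out
}
```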
There is of course the rendering speed as well. Currently, rnote generates SVGs for basically every visible element (strokes, the background, and the different pens) and renders them with librsvg or resvg into textures, which are picked up by GTK's scene graph. ~Blend2d is not really an option (a C++ library)~ (actually, there are bindings, but the last release was three years ago), but there is a 2D graphics crate in Rust called piet which looks very promising, since it is an abstraction over different backends. That would fit the requirements quite nicely: it has a GPU backend for fast rendering, a cairo backend for whenever cairo is needed (for example when printing), and an SVG backend for exporting to SVGs.
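To illustrate what that abstraction would buy: drawing code written against piet's `RenderContext` trait is backend-agnostic, so the same routine could target a GPU backend, piet-cairo for printing, or the SVG writer for export. A hedged sketch (`draw_stroke` is a made-up helper, not rnote or piet API):

```rust
use piet::kurbo::{BezPath, Point};
use piet::{Color, RenderContext};

// Hypothetical helper: builds a polyline from input points and strokes it.
// It is generic over the backend, which is the point of the abstraction.
fn draw_stroke<R: RenderContext>(rc: &mut R, points: &[Point], width: f64) {
    let mut path = BezPath::new();
    if let Some((first, rest)) = points.split_first() {
        path.move_to(*first);
        for p in rest {
            path.line_to(*p);
        }
    }
    // The same call works on any backend implementing RenderContext
    // (piet-cairo, piet-svg, a GPU backend, ...).
    let brush = rc.solid_brush(Color::BLACK);
    rc.stroke(&path, &brush, width);
}
```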
That topic is a slightly different issue, though, and not really related to the stuttering when zooming with many strokes, so I think opening another issue or discussion for it would be a good idea.
> That's an interesting video. Currently, 1 ms is not really possible, as not even the polling rate of drawing tablets allows for that.
Oh, I didn't know that. That's interesting. It actually surprises me, as I have some experience with USB-connected input devices, and even over USB 1.1 you can get close to 1 ms (because USB 1.1 itself can run with a 1 ms delay). With USB 3.x it's much less.
> But there is still some room for improvement, especially in how lines are currently generated / interpolated. Right now, Catmull-Rom splines are generated with the individual input positions as control points. That means they generate a smooth-looking line, but a stroke element will always lag behind by at least one data point compared to straight lines. So there is room for improvement in the algorithms.
Such improvements sound good. Let's see if they help enough.
> There is of course the rendering speed as well. Currently, rnote generates SVGs for basically every visible element (strokes, the background, and the different pens) and renders them with librsvg or resvg into textures, which are picked up by GTK's scene graph. ~Blend2d is not really an option (a C++ library)~ (actually, there are bindings, but the last release was three years ago), but there is a 2D graphics crate in Rust called piet which looks very promising, since it is an abstraction over different backends. That would fit the requirements quite nicely: it has a GPU backend for fast rendering, a cairo backend for whenever cairo is needed (for example when printing), and an SVG backend for exporting to SVGs. That topic is a slightly different issue, though, and not really related to the stuttering when zooming with many strokes, so I think opening another issue or discussion for it would be a good idea.
Just to clarify, Blend2D hasn't had any release yet. It has been in development for more than a decade and is currently in beta. But compared to the SW industry, Blend2D's beta is much more stable than most major releases of critical SW around us. So just clone the master branch and you're good to go. But if the goal is not performance but portability, then piet might make more sense (piet's abstractions have some non-negligible weight, but only reality will tell if it's OK or not). But let's discuss this elsewhere if it's of any interest :wink:.
> That's an interesting video. Currently, 1 ms is not really possible, as not even the polling rate of drawing tablets allows for that.
I think that's the Mutter problem that should be fixed in GNOME 42. I don't know how you test the polling rate, but maybe on a different window manager it would be higher in the meantime.
I'm excited to see if there is a difference in GNOME 42 as well. But drawing tablets poll at, I think, ~200 Hz, so 5 ms at best. I measured the response time between stroke inputs in rnote at one point and it already came pretty close, at 5-15 ms (although the rendering is done in a different thread, so that may be a bit slower). @dumblob The issue isn't the stability, but I prefer Rust-native libraries (which are ideally actively developed and maintained, which the Blend2D bindings are not). It's a long stretch, but a library with different backends would allow for easy porting if someone (probably not me!) wanted to write a port to another platform and reuse the sheet, which is already pretty self-contained. But I am interested: which issues did you encounter while using piet?
> But I am interested: which issues did you encounter while using piet?
It mostly boils down to the fact that Blend2D does not use any pipelines and instead tries to fully interleave everything using JIT. piet partially assumes the backends use pipelining. In practice this adds some (mostly small, I guess) overhead. There is also the immediate vs. retained mode question, but piet doesn't seem to mandate much about how the backend shall work from that point of view. So generally piet is fine IMHO if you insist on using native (i.e. Rust) libraries.
It should probably scale the canvas itself without redrawing the picture until the gesture is done, instead of redrawing the strokes in real time (or redrawing them asynchronously).
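A rough sketch of that approach in gtk4-rs (heavily hypothetical: `MyCanvas`, `set_temp_zoom()` and `commit_zoom()` are made-up names, and `add_controller()` takes the controller by value in newer gtk4-rs versions): the gesture handler only stores a temporary scale and queues a redraw, while the expensive stroke re-rendering happens once, on gesture end.

```rust
use gtk::prelude::*;

// Hypothetical setup: `MyCanvas` is assumed to be a custom gtk::Widget subclass
// whose snapshot() applies the temporary zoom to its cached textures.
fn setup_zoom_gesture(canvas: &MyCanvas) {
    let gesture = gtk::GestureZoom::new();

    let c = canvas.clone();
    gesture.connect_scale_changed(move |_gesture, scale| {
        c.set_temp_zoom(scale); // only store the factor used by snapshot()
        c.queue_draw();         // the scene graph scales the cached textures
    });

    let c = canvas.clone();
    gesture.connect_end(move |_gesture, _sequence| {
        c.commit_zoom();        // re-render the strokes at the final zoom, once
    });

    canvas.add_controller(&gesture);
}
```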