Closed: trusktr closed this issue 4 years ago
InstancedText will better answer this specific need of updating short texts every frame.
For the example you provided, performance would be better with MSDFText, since the bottleneck here is geometry creation on every frame (the GPU does not really matter here; it's CPU-bound). With MSDFText, we only create plane geometries, the glyphs are visible on a texture applied to them, and a shader computes the right alpha.
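For a rough idea of what that shader does, the standard MSDF trick is to take the median of the texture's three colour channels as the signed distance and threshold it. Here is that math in plain JavaScript as a sketch (function names are mine for illustration, not three-mesh-ui's API):

```javascript
// Sketch of the alpha computation an MSDF fragment shader performs,
// written in plain JS for illustration (not three-mesh-ui's actual code).

// The median of the three channels reconstructs the signed distance.
function median(r, g, b) {
  return Math.max(Math.min(r, g), Math.min(Math.max(r, g), b));
}

// A texel inside a glyph stores values above 0.5 in most channels.
function msdfAlpha(r, g, b, threshold = 0.5) {
  const signedDistance = median(r, g, b);
  // Hard step here; a real shader smooths this with fwidth() for antialiasing.
  return signedDistance >= threshold ? 1 : 0;
}

console.log(msdfAlpha(0.9, 0.8, 0.2)); // inside the glyph → 1
console.log(msdfAlpha(0.1, 0.3, 0.2)); // outside the glyph → 0
```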
InstancedText will not be a perfect solution either; it should basically only be used for things like time counters.
I think that we should change the API so that MSDFText becomes Text (the default), and Text would become GeometryText (or any name, open for suggestions), since using it would only answer the corner-case use of a custom font material. MSDFText is a huge performance gain, even when not updating text every frame.
I'm getting back on track today and will work on InstancedText; the COVID lockdown was lifted this week in France, so there was a lot to do.
Here's the same thing, but with MSDFFont:
https://codepen.io/trusktr/pen/d237f115802d793bbd7bb04ebd9b6e52
This time the frames take 16 to 20 ms. Getting better!
On that one, the font is white despite passing a black fontMaterial. Is it because of the PNG texture?
I think that we should change the API so that MSDFText becomes Text (the default)
I think just keeping them both with unique names, GeometryText and MSDFText, would be great so that there is a distinction. Perhaps Text could be an alias for MSDFText. But it would also be easy to alias it in the code:
import {MSDFText as Text} from 'three-mesh-ui'
EDIT: Would GeometryText use the new instancing? Or would there also be a new InstancedText class? Would there be one InstancedMesh per used character for a given font?
This one is interesting!
https://codepen.io/trusktr/pen/01e92663383ead15983aa15164960e12
It has a few issues:
Despite the texture resizing on each frame (inside Three.js, see console warnings), each frame takes only 2 to 2.5 ms! This one is very fast.
Of course the problem with this approach is that when the content moves closer to the camera, the texture will look fuzzy rather than sharp. three-mesh-ui solves this problem.
Maybe we should have a CanvasText class too. It can be good for certain cases where we know the text won't move towards the camera, or when we don't mind the fuzziness. Example use cases:
There can definitely be some VR experiences where the player won't move towards the object with the text.
This is interesting: https://stackoverflow.com/questions/25956272/better-quality-text-in-webgl
From that we could also make a VectorText class. Demo: http://wdobbie.com/pdf
A guide in the documentation could describe which classes are better to use for which use cases. This is very interesting!
Wow, look at the improved demo: http://wdobbie.com/post/war-and-peace-and-webgl/
What's interesting is that the text is re-rendered on each frame, in the fragment shaders. Nothing is pre-computed between the font data and the WebGL rendering (e.g. no geometry generated from the font like the current Text).
On that one, the font is white despite passing a black fontMaterial. It's because of the png texture?
This is a shortcoming due to lack of time 😅 MSDFText currently does not need a fontMaterial; it creates a ShaderMaterial here: https://github.com/felixmariotto/three-mesh-ui/blob/dba7915fdf576d4ef7b54a1b9e91ffd135e9b0c3/src/components/MSDFText.js#L254 https://github.com/felixmariotto/three-mesh-ui/blob/dba7915fdf576d4ef7b54a1b9e91ffd135e9b0c3/src/components/MSDFText.js#L230 I've seen there is an error when you don't pass a fontMaterial though; this is a bug that must be fixed.
In the fragment shader, which is responsible for computing the alpha according to the texture, the pixel is always rendered white, and only the alpha changes. It happens here: https://github.com/felixmariotto/three-mesh-ui/blob/dba7915fdf576d4ef7b54a1b9e91ffd135e9b0c3/src/components/MSDFText.js#L46
Of course this is temporary, it's totally possible to set the colour and tweak the computed alpha to have a semi-transparent font.
So in order to let the user choose a colour and opacity, we should either:
.set
There are pros and cons to either solution; I'm interested in reading your opinion about this.
I think just keeping them both with unique names, GeometryText and MSDFText, would be great so that there is a distinction. Perhaps Text could be an alias for MSDFText. But it would also be easy to alias it in the code
Yes this sounds good 👍
EDIT: Would GeometryText use the new instancing? Or would there also be a new InstancedText class? Would there be one InstancedMesh per used character for a given font?
Well, that was the plan, but it means one draw call per glyph type, so that's why I wrote that it's not a perfect solution... It's really best for numbers, or on the contrary for big texts that justify having a 40-glyph charset and 40 draw calls. For Lume I guess you need to pick one text class and stick to it. In this case I would go for MSDFText; it's the right adaptability/performance balance, and it can still be optimised a bit.
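To make the "one draw call per glyph type" cost concrete: with one InstancedMesh per glyph, draw calls scale with the number of *unique* characters in the string, not its length. A tiny sketch (the helper name is hypothetical, not library code):

```javascript
// One InstancedMesh per glyph type means one draw call per unique glyph.
// Hypothetical helper to estimate the cost of rendering a given string.
function estimateDrawCalls(text) {
  // Whitespace renders nothing, so exclude it before counting unique glyphs.
  return new Set(text.replace(/\s/g, '')).size;
}

console.log(estimateDrawCalls('12:34:56'));   // timer → 7 draw calls
console.log(estimateDrawCalls('1111111111')); // counter of any length → 1
```

This is why it suits timers well: a clock only ever uses the digits and a colon, so the draw-call count stays small and constant no matter how often the text updates.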
Actually I'm even having doubts as to whether InstancedText will be an unnecessary burden.
Wow, look at the improved demo: http://wdobbie.com/post/war-and-peace-and-webgl/
What's interesting is that the text is re-rendered on each frame, in the fragment shaders. There isn't anything pre-computed between the font data and the WebGL rendering (f.e. no geometry generated from the font like the current Text).
This is a crazy good resource, thanks for that, I'm reading!
It's really best for numbers, or on the contrary for big texts that justify having a 40-glyph charset and 40 draw calls
Yep, good for larger amounts of text, where performance will scale as O(n) (until the GPU is maxed out).
Actually I'm even having doubts as to whether InstancedText will be an unnecessary burden.
Maybe worth a try, just to see how it works. Are you already mid-implementation?
At work, I've been using InstancedMesh all over the place. I have maybe a thousand objects on the screen, spread across 10 or so InstancedMeshes. It would be very slow without it.
By the way, which InstancedMesh are you using? I am using @pailhead's three-instanced-mesh more than Three's built-in one. The one in Three still lacks some features. Plus I added per-instance opacity to it: https://github.com/pailhead/three-instanced-mesh/pull/35.
Maybe worth a try, just to see how it works. Are you already mid-implementation?
Kind of; I'm using a previous project as a blueprint: https://github.com/felixmariotto/vr-controller-test
Play it here: https://test-vr-controller.herokuapp.com/
If you can't try it in VR, type gameControl.start() in the console to start the timer.
If you want to do it yourself though, with pailhead's instanced mesh, I'm eager to merge your PR!
The main issue is actually the API; I've been struggling to decide how the user should choose a glyph pool, how to avoid creating a new InstancedMesh on every .set( content ), whether to use MSDF fonts or font geometries, or both... Also I like your idea in #12; I think a reorganisation is needed, which may help with implementing InstancedMesh smartly.
Just started reading this, but on the topic of text, I have some experiments with the "stencil and cover" technique, or whatever it's called. Basically I extracted the curves from three's JSON fonts and used the GPU Gems algorithm to render text. It's not perfect, but I can zoom in and get continuous vector curves with only a fixed number of control points.
I think this is slightly different than the example you posted @trusktr , that one is using signed distance fields?
Except I think I didn't do both types of curves. Would this be interesting if it were another package?
I think this is slightly different than the example you posted @trusktr , that one is using signed distance fields?
If you mean http://wdobbie.com/post/war-and-peace-and-webgl, that one is using vectors in the fragment shaders (but that's about all I know, at a high level). Unless I missed something, the text is generated each frame, within the fragment shader, based on the font vector data. Worth studying!
Would this be interesting if it were another package?
It would be interesting anywhere where it can easily be consumed (easy to use). :D
@pailhead this is interesting; in your experience, what are the pros and cons compared to signed distance fields?
Would this be interesting if it were another package?
Sure 👍
I think you're still somewhat limited by the resolution in an SDF. If I remember that Valve paper from a while ago, you don't exactly get perfect shapes across the board. Evaluating these Béziers, I think, gets you infinite precision, so to speak. I found the code; I just need to clean it up slightly.
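The "infinite precision" comes from evaluating the glyph's curves analytically at whatever zoom level you're at, instead of sampling a fixed-resolution texture. A quadratic Bézier (the segment type TrueType outlines use) evaluates like this — a generic sketch, not code from the experiment above:

```javascript
// Evaluate a quadratic Bézier (TrueType-style outline segment) at parameter t.
// p0 and p2 are on-curve points; p1 is the control point.
function quadBezier(p0, p1, p2, t) {
  const u = 1 - t;
  return {
    x: u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
    y: u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y,
  };
}

// The curve can be evaluated at any t with full float precision,
// so zooming in never pixelates the outline.
const mid = quadBezier({ x: 0, y: 0 }, { x: 1, y: 2 }, { x: 2, y: 0 }, 0.5);
console.log(mid); // { x: 1, y: 1 }
```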
Sounds interesting, and I guess the results with this type of font should be better.
Would this be interesting if it were another package?
@pailhead I would definitely be interested in your GPU vector work as well! I originally wanted to use that sort of technique for troika-3d-text when I started it, but was daunted by the complexity of such a thing, and I found that generating SDFs quickly at runtime was much easier and its results are good enough for almost all cases. But I'm still interested in improving it if possible! 😄
As for performance with the MSDF text, it looks like almost all of the time is currently spent in shader program compilation - see profiling graph below. It appears you create a new shader material on every frame, which I suppose would cause that. If you can reuse the material from one update to the other, there's no reason this shouldn't be very fast.
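The fix for that is the usual cache-and-reuse pattern: build the material once per unique set of parameters and hand back the cached instance on later updates, so no shader program is recompiled. A minimal sketch, independent of three.js (the factory, key scheme, and names here are placeholders, not three-mesh-ui's API):

```javascript
// Cache materials by their parameters so text updates reuse them across
// frames instead of compiling a new shader program each time.
const materialCache = new Map();

function getMaterial(params, create) {
  // Placeholder key; real code might hash the uniform/define values instead.
  const key = JSON.stringify(params);
  if (!materialCache.has(key)) {
    materialCache.set(key, create(params));
  }
  return materialCache.get(key);
}

// A plain object factory stands in for `new THREE.ShaderMaterial(...)` here.
const makeFake = (params) => ({ ...params });
const a = getMaterial({ color: 'black' }, makeFake);
const b = getMaterial({ color: 'black' }, makeFake);
console.log(a === b); // true — the second call compiles nothing new
```

Per-frame changes like a new colour or opacity can then go through uniform updates on the cached material rather than through the cache key.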
Some other possible optimization ideas based on my experience: using an InstancedBufferGeometry rather than merging a set of BufferGeometrys may make that simpler, since you could just update one or two InstancedBufferAttributes rather than all of them. Troika's GlyphsGeometry may give you some ideas there. Hope this is helpful.
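The instancing idea means a text update only rewrites a small per-instance attribute (glyph offset, atlas UV rect) instead of rebuilding merged geometry. A plain-JS sketch of that data layout, with a typed array standing in for a real InstancedBufferAttribute (all names here are illustrative):

```javascript
// With instancing, every glyph shares the same quad geometry; only small
// per-instance attributes change when the text content updates.
const MAX_GLYPHS = 4;
const offsets = new Float32Array(MAX_GLYPHS * 2); // x, y per glyph instance

function layout(text, advance = 10) {
  for (let i = 0; i < text.length; i++) {
    offsets[i * 2] = i * advance; // x position of glyph i
    offsets[i * 2 + 1] = 0;       // y position (single line)
  }
  // In three.js you would then flag: offsetAttribute.needsUpdate = true;
  // and set geometry.instanceCount to the new length.
  return text.length; // instance count to draw
}

console.log(layout('abc'));       // 3 instances
console.log(offsets.slice(0, 6)); // positions: 0,0, 10,0, 20,0
```

Uploading one small attribute per update is far cheaper than regenerating and re-merging per-glyph BufferGeometrys.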
Thank you for the time you dedicated to profiling this lib and diving into its code 😊
There is definitely some room for optimisation, I will look into your suggestions.
Troika's GlyphsGeometry may give you some ideas there.
Interesting. Could even be worth looking at importing that just for this purpose, if it aligns well as a lower-level primitive within three-mesh-ui.
Could even be worth looking at importing that just for this purpose, if it aligns well as a lower-level primitive
I don't think it would align in that way, it's pretty purpose-built for Troika's particular SDF atlas layout assumptions, and of course the shaders have to match as well. I intended that more as an example of how to use an InstancedBufferGeometry for this purpose.
I do wonder if there's maybe potential to use troika-3d-text's standalone TextMesh as an alternative to the MSDF text implementation. The obvious gotchas to me would be dealing with its asynchronous nature (it does all font processing and text layout in a web worker), and its current lack of mixed inline styles, but otherwise I think it could work and it's got some pretty big advantages. That's a whole other subject though. ;)
TextMesh

Just took a look: I see how TextMesh uses the (instanced) GlyphsGeometry to lay out the characters (wrapping the words within the specified width, with line height, letter spacing, alignment, etc). Nice!
I made this interesting example:
https://codepen.io/trusktr/pen/11105a14a0707cc73a67ac62ca5ea6e9
It updates the font size and font content every frame. I found that on my Nvidia Quadro P2000, each frame takes about 27ms.
Looking forward to trying the InstancedMesh concept.
Sidenote: it is interesting that in the demo, the WebGL font stays perfectly in position while resizing (the math works out well), while the DOM font jumps up and down as it resizes. It seems the native browser font rendering rounds the font positioning to the nearest pixel (pixel snapping).
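That jitter can be reproduced with the rounding alone: if the glyph position snaps to whole pixels while the font size animates smoothly, the baseline holds still and then jumps a full pixel. A toy model of the effect (not browser internals, just the arithmetic):

```javascript
// Toy model: a glyph vertically centered while its size animates.
// WebGL keeps the sub-pixel position; pixel-snapped rendering rounds it.
function centeredTop(containerHeight, fontSize) {
  return (containerHeight - fontSize) / 2;
}

for (const size of [20, 20.5, 21, 21.5]) {
  const smooth = centeredTop(100, size);  // what continuous math gives
  const snapped = Math.round(smooth);     // what pixel snapping gives
  console.log(size, smooth, snapped);
}
// The smooth position glides by 0.25 px per step; the snapped one
// stays put for several steps, then jumps a whole pixel at once.
```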