starwing opened this issue 10 years ago
Test case: Something small (but as close to a real example as possible) that triggers the bad behavior on your end. If you can make an example that does all of the above, that'd be great.
Font cache: I think we should start with a moderate cache size, i.e. 512x512, and allow it to expand up to, say, 2048x2048, and remove the cache size from the init function. Take a look at the fontstash example on how to double the cache size when it gets full: https://github.com/memononen/fontstash/blob/master/example/error.c Note that it is safe to enlarge the cache size as per the example. If you reset the cache (which should happen if the cache gets really big), then you should flush the current render queue (as in frameEnd).
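The pattern in that example is roughly this: register a fontstash error callback, and when it reports FONS_ATLAS_FULL, double the smaller atlas dimension. A sketch based on the linked error.c (not a verbatim copy; see the file for the real code):

    // Error callback registered with fonsSetErrorCallback(fs, stashError, fs).
    // Enlarging keeps all existing glyphs valid, so it is safe mid-frame.
    static void stashError(void* uptr, int error, int val)
    {
        FONScontext* fs = (FONScontext*)uptr;
        if (error == FONS_ATLAS_FULL) {
            int w, h;
            fonsGetAtlasSize(fs, &w, &h);
            if (w < h)
                fonsExpandAtlas(fs, w*2, h);   // double the smaller dimension
            else
                fonsExpandAtlas(fs, w, h*2);
        }
    }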
SDF: My plan was to add SDF support so that when you set a certain font size, that size is used to bake the glyph into the font cache. Then if you use scaling, we'll use SDF based scaling so that regular text looks crisp and scaled (using transform) text still looks ok, but not perfect.
Currently the font size in nvgText() is calculated like this:

    float scale = nvg__getFontScale(state) * ctx->devicePxRatio;

where nvg__getFontScale() uses the current transform to scale the text. With SDF fonts it would be just:

    float scale = ctx->devicePxRatio;
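For reference, nvg__getFontScale() derives that scale from the current transform roughly like this, by averaging the scale of the two axes of the 2x3 transform (a sketch of nanovg's approach; the real function may also quantize and clamp the result):

    static float nvg__getAverageScale(const float* t)  /* t = 2x3 transform */
    {
        float sx = sqrtf(t[0]*t[0] + t[2]*t[2]);   /* length of the x axis */
        float sy = sqrtf(t[1]*t[1] + t[3]*t[3]);   /* length of the y axis */
        return (sx + sy) * 0.5f;
    }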
The SDF would be calculated by fontstash; I have not yet added this code in there. I have a busy weekend and week coming, so I cannot promise anything yet.
My rough plan was to add FONS_RENDER_SDF to FONSflags, and that would tell the system to generate SDFs. I made this code in preparation for AA to SDF conversion: https://github.com/memononen/sdf
In addition there should be something like fonsSetGlyphPadding(), which tells how much extra space to leave around the glyphs. NanoVG should set that value based on the font blur, rounded up to the nearest 6 (or so), which would allow a small blur without needing to rebake.
That is, font blur can be calculated using SDF too. Usually the AA from a distance field is calculated so that you smoothstep from 0.5-m to 0.5+m, where m is the size of a pixel in texels (larger scale, smaller value) and 0.5 marks the zero contour of the font. If you scale m up, you get a blurrier result (you may need to apply a curve to that, though).
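As plain C, the coverage computation the shader would do could look roughly like this (a sketch of the idea above, not existing code):

    // d is the value sampled from the SDF texture, where 0.5 is the glyph edge.
    // m is the size of one screen pixel in texels (smaller when scaled up).
    // blur widens the transition band: 0 gives plain antialiasing, more blurs.
    static float smoothstepf(float e0, float e1, float x)
    {
        float t = (x - e0) / (e1 - e0);
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return t*t*(3.0f - 2.0f*t);
    }

    static float sdfCoverage(float d, float m, float blur)
    {
        float w = m + blur;
        return smoothstepf(0.5f - w, 0.5f + w, d);
    }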
The SDF stuff looks great - large fonts are something I'd love to see too.
One of the nice things in Microsoft's Build presentations is a new font format that contains coloured font glyphs. They use it to render icons, etc. that look good at any resolution.
I've been wondering about the font blur stuff, since I have a project where rendering text glyphs black-on-white isn't as nice as in the surrounding windows (I understand the reason for this: ClearType, etc.). But if the nanovg fonts are already rendered using alpha-blended pixels at the edges (my expectation of how that currently works), what does the font blur add on top of that?
Blur is there to be used for drop shadow and such.
Interesting article about Android's font renderer...
https://medium.com/@romainguy/androids-font-renderer-c368bbde87d9
On 13 Jun 2014, at 10:21, Xavier Wang notifications@github.com wrote:
This issue is used for discussing and noting the implementation items mentioned in #111.
The things to do:
- Auto-reset the font cache when it overflows (optionally add an API to do that explicitly); if the string still cannot be rendered after a reset, enlarge the cache.
- Use a shader to do font stroke/blur.
- Implement texture-based SDF (?)
I have some reservations about SDF. An arc-list based SDF (e.g. glyphy, http://code.google.com/p/glyphy) may improve glyph quality, but then the current software renderer is useless; we would have to extract vector data from the font file. A texture-based SDF, on the other hand, does not have good quality when up-scaled (it still looks very smooth, but distorted).
Also, I have only found the two-pass blur algorithm for shaders, so I don't know how to make a single-pass one, or how to apply a radius to a Gaussian blur, or whether we should just use a box blur. So I need some help here :(
When using SDF (whether arc-list based or texture based), text stroking is much easier, so we can write a text stroke shader for it.
At last, @memononen, which kind of Chinese test do you want? A string that contains a lot of Chinese? A blur animation that overflows the current cache? An enlarge/shrink animation that overflows the current cache? Or something else? You should use a ttf file that contains Chinese; the smallest one I found is DroidSansFallback.ttf from Android, and it is still 3 MB+. A YaHei font from Microsoft is larger than 10 MB.
My plans:
- First, implement glyph cache reset. Use the standard fontstash interface, or do it myself? The current implementation is DIY and does not set a callback in fontstash. I prefer doing it myself, which may avoid adding an extra interface to the backend, but it has issues when fontstash detects the overflow while you are in an outer loop, e.g. inside nvgText.
- Then, implement a software SDF texture calculation function in NanoVG, apply it after fonsGetTextureData, and add an extra render type and uniforms to the shader to render SDF fonts.
- Last, implement the stroke shader to do the text stroke. I still need help with the blur shader.
Regards, Xavier Wang.
@cmaughan Awesome article, thanks for sharing. I did not know about libhwui. Interestingly we're doing most of the optimizations already, sans text layout caching. I've been thinking about adding optional Harfbuzz support at some point. I have to take a look at how they handle the full-cache case in a bit more detail.
I have made a design. Now we have 4 (NVG_MAX_FONTIMAGES) fontImages. At first there is a 512x512 (NVG_INIT_FONTIMAGE_SIZE) texture for the atlas, but if the atlas overflows it is reset, and a new 1024x1024 texture is allocated and used for the new glyphs. After nvgEndFrame, the new texture is swapped with the old one (the first one). And if the 1024x1024 texture is not enough, a new 2048x2048 texture is allocated (up to 4096x4096).
If the new slot in fontImages already has a texture, it is reused if its size is >= the current one; otherwise it is enlarged to the new size. When the context is deleted, all these fontImages are deleted.
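In rough code, the overflow handling looks something like this (just a simplified sketch; names like nvg__flushTextTexture, fontImageIdx and nvg__createAlphaTexture are placeholders, the code in my branch differs in details):

    // Simplified sketch: called when fontstash reports the atlas is full
    // while building text for the current frame.
    static int nvg__allocTextAtlas(NVGcontext* ctx)
    {
        int iw, ih;
        nvg__flushTextTexture(ctx);                 // upload pending glyphs first
        if (ctx->fontImageIdx >= NVG_MAX_FONTIMAGES - 1)
            return 0;                               // out of slots, give up for now
        if (ctx->fontImages[ctx->fontImageIdx + 1] != 0) {
            // Reuse the texture already sitting in the next slot.
            nvgImageSize(ctx, ctx->fontImages[ctx->fontImageIdx + 1], &iw, &ih);
        } else {
            // Otherwise double the current size, capped at 4096x4096, and
            // create a fresh alpha texture through the backend.
            nvgImageSize(ctx, ctx->fontImages[ctx->fontImageIdx], &iw, &ih);
            iw *= 2; ih *= 2;                       // 512 -> 1024 -> 2048 -> 4096
            if (iw > 4096 || ih > 4096) iw = ih = 4096;
            ctx->fontImages[ctx->fontImageIdx + 1] = nvg__createAlphaTexture(ctx, iw, ih);
        }
        ctx->fontImageIdx++;
        fonsResetAtlas(ctx->fs, iw, ih);            // start filling the new atlas
        return 1;
    }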
This solves a few issues: a new wrapper function nvg__textIterNext does the magic. The link from @cmaughan was useful. Any comments on this method?
After this is implemented, I will make the first pull request, and the next plan is to implement the SDF algorithm.
I planned to add a new interface, nvgTextQuality, to set the actual baked size of the font, and nvgTextSize is then scaled from the SDF. So if the nvgTextQuality is smaller than the nvgTextSize, the quality may be bad.
If a new quality is set afterwards, the SDF image baked at the old quality is not deleted; it is still used when the text size is smaller than that quality. But once the atlas is full it is deleted, and only the biggest one is kept (but re-rendered).
I have pushed the first goal to starwing/nanovg, so I hope @memononen can comment on it. Thanks a lot!
This is the test: https://gist.github.com/starwing/b3774df16628a77abfb2
You can see it uses two text atlases and swaps between them. Uncomment the printf statement in nvg__allocTextAtlas to see how nanovg.c swaps the buffers.
(I'm really busy this week, sorry I did not have time to take a look at the code yet, but I will! Also, thanks for the example, I'll try that too.)
You can call fonsExpandAtlas() mid-frame, and it will do all the right things to expand the glyph cache. Try this example from fontstash: https://github.com/memononen/fontstash/blob/master/example/error.c#L51
I wonder what the multiple textures approach gives on top of that? I have not figured out yet what would be the best thing to do if the atlas gets so big that it cannot be expanded in the middle of the frame. One option would be to check at nvgEndFrame if the atlas is almost full and then reset it; that has pretty nasty worst case behavior. Another option would be to call flush when the atlas is full and cannot be expanded anymore, but that can cause some odd behavior. I think the current implementation of the callback logic in fontstash does not like reset being called in the callback, but it should be possible to improve that.
I think the default behavior should be that we always try to rasterize fonts to match their on-screen pixel size, but deviate from that when performance is at stake. As for nvgTextQuality, I suggest that it takes two params: one is the behavior and the other is a value. The default behavior would be nvgTextQuality(NVG_PIXEL_PERFECT, 0), which bakes glyphs based on the current font size and pixelRatio, but you can set it to nvgTextQuality(NVG_FIXED_SIZE, 20) if you want to, in which case we use the set size (and maybe the pixelRatio) to bake each glyph just once. What do you think about that? If we can avoid the second behavior, I think we should, to keep things simple.
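In code, the proposal would look roughly like this (nothing here exists yet; the names are just the suggestions from this discussion):

    // Proposed API sketch, not implemented.
    enum NVGtextQualityMode {
        NVG_PIXEL_PERFECT,  // bake glyphs at their on-screen pixel size
        NVG_FIXED_SIZE      // bake glyphs once at a fixed size, scale via SDF
    };
    void nvgTextQuality(NVGcontext* ctx, int mode, float value);

    // Usage:
    nvgTextQuality(vg, NVG_PIXEL_PERFECT, 0);   // default behavior
    nvgTextQuality(vg, NVG_FIXED_SIZE, 20);     // bake at size 20, SDF-scale the rest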
Thanks for the comment! It's up to you to decide when to look into it :)
Currently your implementation does not use any callback in fontstash. I think that's good, because the nanovg_gl backend is buffered, so using a callback may cause issues. E.g. you flush the text and think you can safely modify the text texture, but in fact the drawing is not immediate, just buffered! So you cannot actually modify the text texture at that point, and fontstash can never know when the buffered drawing operations are really flushed.
In this case we can only allocate a new texture instead of freeing the old one; that's why I used multiple textures. When you need to resize the texture you should keep the old one for the previously buffered text drawing and use the new one for subsequent drawing. So, in a buffered drawing model, you really need two or more text textures (i.e. keep all the old ones) if you want to expand the atlas.
Maybe there is a way to resize a texture and keep its data, but currently NanoVG does not have such an interface. Maybe NVGparams.renderResizeTexture? If we had that, then maybe the multiple textures and the texture switching could be avoided.
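Something like this, purely hypothetical (no such callback exists in NVGparams today):

    // Hypothetical addition to NVGparams: resize an existing texture in place
    // while preserving its current contents.
    int (*renderResizeTexture)(void* uptr, int image, int w, int h);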
@starwing I had time to take a look at your code. I think it is a good approach. I left a few comments in there.
I have made the changes and force-pushed; please comment again and let me know whether anything else needs to change. Thank you :)
@memononen I have rebased and force-pushed. Now I think the code is ready to merge; is a pull request needed now?