Chlumsky / msdfgen

Multi-channel signed distance field generator
MIT License

Complex paths advice requested #109

Closed thygrrr closed 3 years ago

thygrrr commented 4 years ago

Hey Chlumsky, I really like the results you have for fonts! Great stuff! I also read your thesis and think it's a great extension to the state of the art of multi-channel distance field generation. Kudos for that research and the good write-up!

For my SDF use (improving rendering of a few hundred legacy texture assets for a game I'm working on), I'm looking for ways to generate detailed distance fields of fairly finely structured lines. (see attached .zip file) This is how I found your thesis. 👍

My sample SVG file contains just one path, but the resulting outputs from your implementation fall a bit short. :-) I'm not even sure what it does, but I think it only looks at a single sub-section of the SVG, even though it contains just one path.

The command, whether run in sdf or the default mode, is: msdfgen.exe -svg lines-vector.svg -size 256 256

My current solution (sketched as commands after the list) is to:

  1. perform some preprocessing on the textures (usually 1024² or 2048² textures, some 4096²)
  2. vectorize them through potrace (lightning fast :))
  3. rasterize them again, at the same resolution but now cleanly anti-aliased
  4. process them using an Anti-Aliased Euclidean Distance Transform (AAEDT), resulting in a quite usable, organic-looking line-art texture
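
A rough sketch of steps 1-3 as command lines (illustrative only, not taken from the original post: it assumes ImageMagick and potrace are on the PATH, the threshold value is a placeholder, and ImageMagick needs an SVG delegate for the last step; step 4 is a custom AAEDT pass):

magick texture.png -colorspace Gray -threshold 50% texture.pbm
potrace texture.pbm --svg -o lines-vector.svg
magick -density 96 lines-vector.svg lines-raster.png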

The output, however, has a much lower quality than what a multi-channel map produces (and a bunch of other quality and performance issues). So by going straight to an SDF from the SVG, I could cut out an unnecessary rasterization step and avoid using an algorithm with a relatively high margin of error.

Question: what would you suggest I do to my SVG paths to apply your algorithm to large images like this, with an output size anywhere from 256x256 (preferred) to 2048x2048 (zero size gain over a normal texture, but much crisper rendering)? I noticed your implementation makes limited use of parallel processing; I think this is something I could easily add, given that the brunt of the work appears to be the per-pixel sampling of distances.

svg-and-outputs.zip

(attached images: output - default, output - sdf, lines)

thygrrr commented 4 years ago

I only saw it through this post (thanks, GitHub attachment layout and ordering!), but it does indeed appear to be a scaling issue. Scaling this to 0.125 seems to work, but the distances are very short. I can run multiple processing jobs in parallel, but this is easily a minute or two of processing per image.

The default maximum pixel distance appears to be very low, sadly. Why? What is the "scale" that it uses, and what is it trying to express?

(attached images: output, output - sdf)

My current test command is: msdfgen.exe <sdf or default> -svg lines-vector.svg -size 256 256 -scale 0.125 -pxrange 16

This is kind of where this implementation falters, though. (attached images: output, output)

Still need to find out what's up with the scale; the images are still cut off.

thygrrr commented 4 years ago

-autoframe does something magical and it seems to work with good accuracy on my images. Why? (If I tell other software to convert the SVG to a PNG, it creates a 2048² texture, i.e. 8x larger than the 256² distance output, yet a scale of 0.125 doesn't work.)

That said, the 256² map (just sdf, because that's what my shader currently deals with) doesn't map well to the UVs of the model anymore, so some offset was introduced. It's also nowhere near as crisp.

msdfgen.exe sdf -svg lines-vector.svg -size 1024 1024 -autoframe -pxrange 32

This command takes "a while" to run, to say the least. I can generate an AAEDT texture in ~30 seconds at 4x the size with a really inefficient algorithm, also not parallelized, so I wonder where the main workload happens in your implementation. 😎

Do you do anything other than measure the distance from each pixel to each line segment to get the final distance? I think there are some bounding volume hierarchy optimizations that could cut a lot of the work away at that stage.

thygrrr commented 4 years ago

(attached images: 1. AAEDT reference, 2. msdfgen sdf output)

Getting there. The 2nd image is the output of the above command; the 1st is my AAEDT reference. Any optimization tips are very welcome. I'm worried the most about properly centering the image.

Chlumsky commented 4 years ago

This thread is pretty packed so I will try to address the questions or misconceptions one by one.

Firstly, how to position / scale. Your SVG's canvas is 2730.666² pixels (2048² pt, but msdfgen uses pixels). Therefore, to get the whole canvas in the SDF, you need to set scale to output size divided by 2730.666. For example, for a 1024x1024 output, it would be 0.375. Of course, if you want distances slightly around the shape as well, you need to add padding by tweaking the scale and translation accordingly.
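
To spell out the arithmetic (ignoring any padding), using the numbers above: scale = output size / 2730.666, so 1024 / 2730.666 ≈ 0.375, 2048 / 2730.666 ≈ 0.75, and 256 / 2730.666 ≈ 0.094.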

... with an output size anywhere between 256x256 (preferred) ...

This won't work, because your design has a lot of very thin lines. The SDF requires at least one texel center to always fall within the line, so the thinnest line should be no thinner than the span of two SDF texels across, otherwise it will break up. In theory, this can be circumvented by a different coloring strategy (this is mentioned at the very end of my thesis) but this hasn't been implemented.
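
To put numbers on that (derived from the canvas size above): at a 256² output, one SDF texel spans 2730.666 / 256 ≈ 10.7 SVG pixels, so lines thinner than roughly 21 SVG pixels (two texels) will break up; at 1024² the limit is roughly 5.3 SVG pixels.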

I noticed your implementation makes limited use of parallel processing; I think this is something I could easily add given that the brunt of the work appears to be the per-pixel sampling of distances.

You are right about the second part; however, if you compile the program with OpenMP (add the compiler flag and define MSDFGEN_USE_OPENMP), this optimization takes effect and parallel processing is used to its peak potential. In the official release this is disabled to eliminate any and all external dependencies.
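
For reference, a minimal sketch of how to enable this when building from source with GCC or Clang (not the project's official build instructions; MSVC uses /openmp instead of -fopenmp): add -fopenmp -DMSDFGEN_USE_OPENMP to the compile flags and link with -fopenmp as well.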

The default maximum pixel distance appears to be very low, sadly. Why?

The default distance range is the minimum value required for proper basic rendering without any additional effects. It also allows for anti-aliasing at 100% scale and greater. You are welcome to set a different value to your liking.

-autoframe does something magical and it seems to work with good accuracy on my images.

It doesn't do anything magical. It simply computes the bounding box and sets the scale and translation so that it fits the output. You may want to use the -printmetrics argument to see what the bounding box is and what scale and translation have been set.
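
For example (an illustrative invocation combining the arguments already used in this thread): msdfgen.exe sdf -svg lines-vector.svg -size 1024 1024 -autoframe -printmetrics -pxrange 32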

You are right that the implementation is very bad at dealing with too many edges such as in your case, which has been a weakness from the beginning. The complexity is indeed the number of input edges times the number of output pixels (lower and upper bound). It usually isn't a big deal though, especially for the typical use case of single characters of fonts. Unfortunately I haven't been able to come up with a good optimization to solve this as of yet.
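
To make the cost model concrete, it is roughly the following (an illustrative Python sketch of a brute-force evaluation, not msdfgen's actual code; signed_distance_to_edge and to_shape_space are hypothetical helpers):

# Brute-force SDF evaluation: every edge is visited for every pixel,
# so the work is proportional to len(edges) * width * height.
def generate_sdf(edges, width, height, to_shape_space):
    sdf = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            p = to_shape_space(x + 0.5, y + 0.5)  # texel center in shape coordinates
            # keep the signed distance with the smallest magnitude
            sdf[y][x] = min((signed_distance_to_edge(e, p) for e in edges), key=abs)
    return sdf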

I think there are some bounding volume hierarchy optimizations that could cut a lot of the work away at that stage.

Yes, for the sdf mode, something like this would be quite easy to add, but all the other ones use the pseudo-distance metric, which cannot be bounded because it involves infinite continuations of each edge at both of its ends. As I said, I haven't been able to come up with a good solution for this yet (although I haven't been trying too hard either).

I'm worried the most about properly centering the image.

Do not use -autoframe; stick to the scale formula I wrote at the beginning and use zero translation.
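
Putting that together for a 1024² output, an illustrative command line (with 0.375 = 1024 / 2730.666 and the zero translation written out explicitly) would be: msdfgen.exe sdf -svg lines-vector.svg -size 1024 1024 -scale 0.375 -translate 0 0 -pxrange 32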

thygrrr commented 4 years ago

Thank you for the detailed reply. Indeed I realized in other tests that I can't get a 256² SDF to work, even using floating point textures that use two channels at a 120° heuristic offset, for this particular application. (it's mostly not about memory or fill rate improvement, anyway, so I can live with this)

Thank you for pointing out the MSDFGEN_USE_OPENMP symbol.

About my intermediate SVGs, the figure of 2731 occasionally came up and I didn't realize it was points vs. pixels; I guess I was blind. :) Thank you so much! I'll look into potrace's manpage/API to see whether I can make it use different units in its output, or just work with the formula you suggested.

The complexity is indeed the number of input edges times the number of output pixels (lower and upper bound). It usually isn't a big deal though, especially for the typical use case of single characters of fonts. Unfortunately I haven't been able to come up with a good optimization to solve this as of yet.

Ah, this clarification helps; I was unaware the pseudo-distance was actually harder to calculate. I'm still unfamiliar with the problem domain, yet those infinite continuations of edges, and having to work along either side of them, do make me think of possible applications of Binary Space Partitioning rather than AABB bounding volume hierarchies.

thygrrr commented 3 years ago

Update for posterity: The newest version 1.8 is lightning fast (under 3 seconds for single-channel, under 10 for multi-channel).

Now I need to write a small SVG tool that will auto-combine the paths for me. Inkscape gets it (mostly) right, but I need to ensure it can run automatically and unattended as part of an asset pipeline (a rough sketch of such a tool follows below).
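
A minimal sketch of what that tool could look like (an illustrative example, not part of msdfgen or the pipeline described above: it assumes a flat SVG whose paths all start with an absolute moveto, carry no individual transforms, and share the same fill rule):

# combine_paths.py - merge all <path> elements of a simple SVG into one path.
# Hypothetical helper; assumes no per-path transforms and a shared fill rule.
import sys
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def combine_paths(in_file, out_file):
    tree = ET.parse(in_file)
    root = tree.getroot()
    paths = root.findall(f".//{{{SVG_NS}}}path")
    if len(paths) < 2:
        tree.write(out_file, xml_declaration=True, encoding="utf-8")
        return
    # Concatenate all path data into the first <path>; each subpath's leading
    # absolute "M" keeps the combined data valid.
    paths[0].set("d", " ".join(p.get("d", "") for p in paths))
    # Remove the now-redundant <path> elements from their parents.
    for parent in root.iter():
        for p in paths[1:]:
            if p in list(parent):
                parent.remove(p)
    tree.write(out_file, xml_declaration=True, encoding="utf-8")

if __name__ == "__main__":
    combine_paths(sys.argv[1], sys.argv[2])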