w3c / fxtf-drafts

Mirror of https://hg.fxtf.org/drafts
https://drafts.fxtf.org/

[filter-effects-2] Require image re-sampling for feDisplacement #21

Open AmeliaBR opened 8 years ago

AmeliaBR commented 8 years ago

David Dailey writes to www-svg:

Compare this page in Chrome and Firefox (Windows).

http://cs.sru.edu/%7Eddailey/svg/distortGrid0.svg Compare both speed and image quality.

You’ll note that Firefox performs much faster. I am told that this is because “Firefox on Windows uses Direct2D IIRC, and the filters are accelerated.” The speed issue should be handled later, according to the Chrome folks.

The troublesome thing, though, is the image quality. Observe the crispness of the edges in Firefox and the truly inadequate rendering in Chrome. When I raised the issue as a Chrome bug the folks there did a goodly amount of research on the history of the problem and it seems to be a spec bug rather than a browser bug, introduced when the filters module was split out from the rest. (I’m so not in favor of having 29 different specs and Working Groups – it’s always a prime number -- to talk to when I find a problem with SVG that wasn’t there before!)

I made a slightly simpler version here: http://cs.sru.edu/~ddailey/svg/distortGrid.svg without some of the aesthetic niceties.

I gather it comes from not specifying how the results of an feDisplacementMap will be resampled.
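For reference, the spec defines feDisplacementMap as a per-pixel gather, P'(x,y) ← P(x + scale·(XC(x,y) − 0.5), y + scale·(YC(x,y) − 0.5)); the unspecified part is how P is sampled at the resulting non-integer coordinates. A minimal sketch of that loop, assuming images as nested lists of channel tuples and a single nearest-neighbour sample (the one-sample strategy Blink's software path reportedly uses):

```python
# Sketch of the feDisplacementMap core loop per the spec formula.
# Taking only one nearest source sample per output pixel is what
# produces the blocky output discussed in this issue.

def displace(src, disp, scale, select_x=0, select_y=1):
    """src, disp: 2-D lists of per-pixel channel tuples with values in [0, 1]."""
    h, w = len(src), len(src[0])
    # Undefined pixels are transparent black, per the Filter Effects spec.
    out = [[(0.0, 0.0, 0.0, 0.0)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xc = disp[y][x][select_x]          # selected displacement channels
            yc = disp[y][x][select_y]
            sx = x + scale * (xc - 0.5)        # non-integer source coordinate
            sy = y + scale * (yc - 0.5)
            ix, iy = round(sx), round(sy)      # nearest neighbour: one sample
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = src[iy][ix]
    return out
```

A smoother renderer would instead blend the (up to) four source pixels around (sx, sy); the spec currently leaves that choice to the implementation.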

Ultimately, if an feDisplacement is applied to a vector image, we might, alternatively, approach the problem by doing a convolution of the respective functions (applying the continuous Perlin noise, functionally, to the associated curves themselves, resulting in vector rather than pixel-based output – but the math here could get gnarly – I don’t know. ) At any rate, having a flexible set of distortion filters that preserves vectors would be a good thing to think about for SVG3, if it’s not already in the wish lists. Having a bitmap at the core of spherical distortion in the test cases seems a bit like a hack to me!

In the meantime, can we require feDisplacementMap to do a little better? It’s not only a problem when vectors are distorted – note how the bitmaps crack and separate using Chrome here: https://ello.co/ddailey/post/iagdm_myjtpdy9dyvsoozq I can report that this sort of cracking didn’t happen in Adobe’s ASV, nor in Opera during its SVG heyday, so this is, indeed, a step backwards from SVG 1.1. Of course, the examples that used to work in the ASV and Opera heyday (remember the fried-egg billiards and the face distorted by pond ripples from 2005?) no longer work anywhere, due to other manifestations of “progress” – I think I used enable-background or some such thing, since in those days you could!

I agree that the strict pixelation of the Blink rendering is unfortunate. Based on quick testing in Edge, manually adjusting the parameters, it looks like they also re-sample the graphic when it is distorted. It's not quite as smooth as in Firefox, but better than the dithering effect created in Chrome.

One option could be to make rendering dependent on image-rendering, allowing browsers to pick their auto rendering but allowing authors to request smooth re-sampling when possible.

dirkschulze commented 6 years ago

@AmeliaBR I expect that this was a problem of screen resolution rather than an actual issue. The spec suggests applying filter effects at the proper display resolution rather than in user space. On Mac I do not see a quality difference between Firefox, Safari and Chrome... just a huge performance difference.

The filterRes attribute was meant to allow applying filter effects in a different, user-specified coordinate space, but it was removed from the spec.

Maybe image-rendering could be used to imply a smoother, less accurate image-processing resolution. The real issue here, and a reason why filterRes was removed, is that it is unclear how some filters, like feConvolveMatrix with kernelMatrix, could be scaled up and down. We could leave it up to the implementation to temporarily up- and downscale the intermediate image results for feConvolveMatrix. Maybe implementations already do this in some situations, though.

fsoder commented 6 years ago

Not a problem with screen resolution (with high pixel densities the problem will be less noticeable, though, or will require quicker changes in the displacement [normal] map to be easily noticed). It's really about how you sample from the input image (in) – whether you use one sample or several (Blink only uses one when using software rendering). As mentioned in the bug link above, there's already a Note about using image-rendering in the spec. But it's only a Note.

AmeliaBR commented 6 years ago

I wouldn't consider this a filter resolution issue. Even if filterRes still existed, bumping it up to a higher value to allow proper sampling of the distorted image would be a poorly performing hack, because it would mean upscaling everything and maintaining that higher resolution until the final downscaling at render time. Ideally, the sampling would happen all at once when drawing the distorted layer.

By the way: Firefox seems to have changed their implementation since this issue was first posted. On Windows, their output is now just as pixelated as Chrome's (though less janky in the animations).

dirkschulze commented 6 years ago

@AmeliaBR What exactly would image-rendering do? Could you give a concrete example? After all, your concerns are both performance and overly low-res samples. How can image-rendering change this?

AmeliaBR commented 6 years ago

@dirkschulze

My suggestion was that image-rendering (smooth vs pixelated) would be used for the local scaling effect created by the distortion, and yes, that could have performance impacts. So it would mostly be useful only for non-animated images.

In an animated example like the ones linked above, it would effectively indicate whether to compromise rendering smoothness vs refresh rate. Which might not make David happy, but something's got to give.

Personally, I'm happy with keeping it as a non-normative note suggesting that browsers should respect image-rendering: smooth if it doesn't compromise performance. The exact wording of the note could probably be more clear than the current text, though.

dirkschulze commented 6 years ago

@AmeliaBR Making this non-normative makes it harder to actually make it normative in the future, IMO. Especially when a property and property values are involved. That said, we do have at least 2 informative references inherited from SVG 1.1 which say:

Depending on the speed of the available interpolates, this choice may be affected by the image-rendering property setting.

Do you suggest using this property per primitive or for the entire filter operation? If the latter, then this might interfere if the property actually gets applied on an image together with a filter. So I assume you mean the former.

I'd like to get more concrete into your proposal. Lets say you have a 1000x1000 dot input source and a 1000x1000 dot displacement map. How would the image-rendering behave? Scale both up (oversampling) or down dependent on the value of the property? Leave the actual scale level to the implementation? What indication or suggestion would we give implementations? What is the desire of pixelated? To get to 60fps animations?

AmeliaBR commented 6 years ago

Do you suggest using this property per primitive or for the entire filter operation?

I'd expect it to apply per filter primitive, similar to color-interpolation-filters.

dirkschulze commented 6 years ago

@AmeliaBR We would need to reinterpret image-rendering's values:

auto: Use device pixels (current behavior).
crisp-edges: Oversampling; how much is up to the implementation.
pixelated: Downscaling, for better performance.
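One way to picture this reinterpretation is as a per-primitive resolution factor for the intermediate filter surface. The concrete factors below are illustrative assumptions only; the proposal explicitly leaves the scale level to implementations.

```python
# Hypothetical mapping of the reinterpreted image-rendering values to an
# intermediate filter-resolution factor (factors are assumptions, not spec text).
RESOLUTION_FACTOR = {
    "auto": 1.0,         # device pixels (current behaviour)
    "crisp-edges": 2.0,  # oversample; actual amount left to the implementation
    "pixelated": 0.5,    # downscale for better performance
}

def intermediate_buffer_size(width, height, image_rendering="auto"):
    """Size of the intermediate surface a filter primitive might render into."""
    factor = RESOLUTION_FACTOR.get(image_rendering, 1.0)
    return round(width * factor), round(height * factor)
```

For the 1000x1000 example discussed above, crisp-edges would oversample to a 2000x2000 intermediate and pixelated would drop to 500x500 under these assumed factors.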

Note that this does not yet give us the possibility to apply image filters (bilinear interpolation/nearest neighbor) to input or output images; that could come in the future.

AmeliaBR commented 6 years ago

I'd like to make your proposal more concrete. Let's say you have a 1000x1000 dot input source and a 1000x1000 dot displacement map. How would image-rendering behave? Scale both up (oversampling) or down, depending on the value of the property? Leave the actual scale level to the implementation? What indication or suggestion would we give implementations? What is the goal of pixelated? To get to 60fps animations?

You wouldn't scale either image as a whole. You would sample individual pixels. If a black pixel at (0,0) and a white pixel at (0,1) end up displaced such that there are two extra pixels in between them with no directly matching pixel from the source image, the question is how to draw those in-between pixels. Do you alias them to be either black or white? Or do you shade them in gray, according to the distance between the black pixel and the white one?

In other words, for an input image like this (4x1, alternating white and black pixels):

 ▓ ▓

That gets displaced by a gradient, to create the following map of X-displacements:

0 3 6 9

In other words: the first (white) pixel at (0,0) stays where it is. The second (black) pixel at (1,0) gets shifted 3 pixels right, so that it is at (3,0). All the other pixels get displaced out of the 4x1 bounds.

The question is: How do you color the result pixels at (1,0) and (2,0)?

Should the outcome look like this, each missing pixel aliased to match its nearest neighbour?

  ▓▓

Or like this, with the colors of the missing pixels interpolated as a weighted average of their nearest neighbours?

 ░▒▓
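The two outcomes can be reproduced with a tiny 1-D resampling sketch (grey values in [0, 1], 1.0 = white; the uniform stretch stands in for the displacement gradient above):

```python
# Stretch a 2-pixel row (white, black) across 4 output pixels, either by
# aliasing each output pixel to its nearest source pixel (one sample) or by
# linearly interpolating between the two surrounding source pixels.

def resample(src, width, mode="nearest"):
    """Stretch a 1-D row of grey values across `width` output pixels."""
    out = []
    for x in range(width):
        s = x * (len(src) - 1) / (width - 1)   # continuous source coordinate
        if mode == "nearest":                  # one sample: aliased result
            out.append(src[round(s)])
        else:                                  # linear: weighted average
            i = min(int(s), len(src) - 2)
            t = s - i
            out.append(src[i] * (1 - t) + src[i + 1] * t)
    return out

white, black = 1.0, 0.0
print(resample([white, black], 4, "nearest"))  # [1.0, 1.0, 0.0, 0.0]  ->  "  ▓▓"
print(resample([white, black], 4, "linear"))   # white-to-black ramp   ->  " ░▒▓"
```

The nearest-neighbour row matches the first figure; the linear row gives the interpolated grey ramp of the second.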

For the image-rendering values, I would expect:

dirkschulze commented 6 years ago

@AmeliaBR The Filter Effects spec is very clear about what should happen for undefined pixels (which can occur, for example, when no source pixel is displaced onto a given output pixel).

Sometimes filter primitives result in undefined pixels. For example, filter primitive feOffset can shift an image down and to the right, leaving undefined pixels at the top and left. In these cases, the undefined pixels are set to transparent black.

What you suggest is more in line with what I was thinking image-rendering is good for. What you describe is actually an image filter. Currently supported filters are nearest neighbor and bilinear interpolation. Of course there are more, but they are not as commonly implemented in graphics libraries.

feDisplacementMap currently does not include a post-processing step that does the interpolation you ask for. Apart from feImage, there is no image interpolation in use, except when rendering the last filter primitive result to screen. There could be differences between interpolation methods there, though. And maybe that is what you saw in Firefox?

IMO an interpolation filter is a nice thing to have in certain situations, but since no implementation is experimenting with it, I would be in favour of deferring this request and adding it to the second level of the specification. What do you think?

dirkschulze commented 6 years ago

@AmeliaBR Are you fine with deferring to level 2?

AmeliaBR commented 6 years ago

Certainly, any interaction with image-rendering should be deferred.

Regarding David's original complaint, maybe it's worth adding a warning to authors? "This version of the specification does not define the interpolation method to be used when distorting the source image, and different user agents may render the effect differently."