software-mansion / react-native-svg

SVG library for React Native, React Native Web, and plain React web projects.
MIT License

support macos & windows #297

Closed liuhong1happy closed 2 years ago

liuhong1happy commented 7 years ago

Links

liuhong1happy commented 7 years ago

I wrote a similar project for Windows: react-native-windows-svg.

Contributions are welcome!

liuhong1happy commented 7 years ago

react-native-windows-svg already supports SVG components, as follows:

sonaye commented 5 years ago

@msand Why not add macOS as a target? That should cover react-native-macos.

msand commented 5 years ago

@sonaye I'm considering a rewrite in C++ or Rust because of the Fabric rewrite of React Native; see https://github.com/react-native-community/react-native-svg/issues/878. This would simplify cross-platform efforts significantly.

shergin commented 5 years ago

@msand Seems Fabric is buildable in OSS, so maybe it's time! I would love to discuss it and provide any help I can.

msand commented 5 years ago

@shergin Great to hear, what approach did you have in mind?

msand commented 5 years ago

@RazrFalcon Any suggestions on what would be the best way of integrating resvg in react-native? Is the azul approach (uses the resvg parser, usvg simplification and the lyon triangulation libraries) reasonable? The content in this case can be dynamic rather than static, and would need fill and stroke hit testing / event handling for touch and gestures. https://github.com/maps4print/azul/wiki/SVG-drawing

Thanks for your excellent work on resvg btw, I'm a huge fan.

RazrFalcon commented 5 years ago

@msand If you plan to use resvg from something other than Rust, then you should probably convert the SVG to uSVG via usvg. resvg is just a renderer, so you don't need anything from it. And usvg doesn't have a C API, and I'm not sure it ever will.

Note that with this approach you still have to implement the text layout (see below) and filters manually. This is the hardest part. Everything else should be relatively easy (1000-2000 LOC in most of the languages).

Also note that text elements will soon be converted into paths, so you won't have to worry about this either. Text layout is one of the hardest parts of SVG.

UPD: I've just looked at the sources, and it looks like you have pretty good SVG support already. I'm not sure why you need resvg.

shergin commented 5 years ago

I am not sure about the exact approach to rendering (a custom implementation or resvg). I think we should find a balance between cross-platform-ness and performance (relying on CoreAnimation primitives and CALayers should perform best... probably?).

From the RN/Fabric perspective, the implementation should be similar to the existing Text implementation ( https://github.com/facebook/react-native/tree/master/ReactCommon/fabric/components/text ), where a bunch of ShadowNodes of different kinds construct a plain C++ object (representing the whole SVG structure) and pass it to the main-threaded renderer on the View layer.

So, compared to the existing implementation (on iOS), all SVG nodes will be represented by a single UIView subclass that "knows" how to draw the SVG object. Only this part will be platform-specific.

msand commented 5 years ago

@RazrFalcon At least for Windows and Linux we don't have any logic/support at all yet. Also, some of the content people use this library with is entirely static; those cases could perhaps benefit from the whole resvg > usvg > lyon setup, no?

So rather than doubling the number of codebases and programming languages used in this project, we would use or create a C/C++/Rust SVG stack with FFI glue, to share as much as possible across platforms. SVG rendering shouldn't have any platform-dependent output anyway, and the tessellation from lyon should be efficient to render with OpenGL, Vulkan or D3D. Using e.g. OpenGL on iOS, and allowing natively animated transforms to be applied on separate layers, should allow significant performance improvements compared to the current implementation.

I've also noticed that text layout seems like the hardest part to get right, and text-on-a-path doesn't make it any easier. Also, only about 50% of the filters have been implemented in the filters branch, which has become stale by now. That's only on iOS; I haven't ported them to Android yet, partly in anticipation of the Fabric rewrite of react-native and a possible cross-platform rewrite of react-native-svg. So I've been thinking it would perhaps make the most sense to combine efforts somehow, rather than keep reinventing the same logic in many languages/libraries.

msand commented 5 years ago

@shergin Is there any example of a react-native module outside core built using Fabric (ideally something with custom rendering of some sort)? Or some collection of commits showing a rewrite from ViewManagers to the new pattern? Will art be rewritten to use it as well?

RazrFalcon commented 5 years ago

@msand I'm not familiar with lyon's tessellation and GPU rendering in general, so I can't help here for sure.

benefit from the whole resvg > usvg > lyon setup

Not sure about this order; currently, it's the other way around. resvg is basically a rendering backend which translates usvg's render tree into the backend's primitives. The only logic it has right now is filters, but those are CPU-bound, mostly backend-specific, and will probably be useless on a GPU. So you don't really need resvg, only usvg, which does all the magic.

people use this library with is entirely static

That's possible with usvg, but it still links system libraries for font querying (directdraw, coretext, fontconfig+freetype). So it's not completely self-contained.

I think usvg is the perfect (and only) foundation for this kind of project, so I'm willing to help as much as possible.

msand commented 5 years ago

@shergin I've collected most of the IDL specs needed for svg in the idl/ folder. I've tried to get the Fabric version of RNTester to build, but haven't got it working yet. Is there any specific commit on the master branch which works? Or are there any more detailed instructions on how to get started?

I understand the codegen is supposed to use JS/Flow as the source of truth, but I couldn't figure out how to set that up yet either; the repo and the web seem very thin on guidance/examples in this regard. I was thinking I'd generate the Flow types from the IDL (as it's the source of truth in this case), and probably make it work for some minimal subset first, like rendering Rect elements with varying dimensions, transforms, fill and stroke, to ensure that the foundation is solid, that native animations and gestures work correctly, and that everything is optimized before building more on top.

I'm thinking it probably makes sense to have a rendering backend abstraction, to allow using different renderers and compare performance for specific workloads:

- https://github.com/servo/pathfinder (will be used in webrender for e.g. svg)
- https://github.com/intel/fastuidraw (pure gpu based, very fast)
- https://blend2d.com/ (nice Anti-Grain Geometry implementation)
- https://skia.org/ (fast and relatively complete/stable)

These all have very interesting approaches and performance characteristics, and it would be interesting to compare them to the native backend (Android is largely built on Skia anyway, so using Skia on all platforms would make rendering more consistent without changing the current output on Android; Chrome, Firefox, Flutter, etc. also use Skia).

Doing all the rendering in Rust or C++ and on the GPU, and then merely compositing the resulting bitmap/texture using the native view layer, would eliminate all the extra marshalling/FFI overhead between e.g. Java and C++. The JS would construct the shadow nodes with JSI, then the renderer would do all the work in C++ and make a single call into Java/Obj-C and the native view hierarchy, where a single native View object represents the entire svg element and everything it contains. The native view would call back into the C++ to handle hit testing; alternatively, the necessary bounds, paint and path data would need to be made available to Java/Obj-C as well.

@shergin Does the new architecture allow animating properties of shadow nodes which don't have a corresponding native view? And how do I redirect touch events to the correct element and event handler in JavaScript, if only the root creates an actual native view (rather than just a shadow node)?

I'm not sure how well usvg would play together with the gesture responder / hit testing system, as any rendered element can have event handlers.

@RazrFalcon Is there any way to know which parts of the output correspond to which parts of the input, to allow us to call the correct handler? To what degree do elements get merged? It might not be an issue if we can somehow forbid specific elements from being merged/simplified too much, e.g. if they have event handlers and thus need a bijection between their identities in the input and the output.

RazrFalcon commented 5 years ago

usvg is purely static. It's hard to explain what usvg actually does, but the resulting SVG/render tree is very different from the original one. It would be easier if you told me what kind of information you plan to query: the list of objects, their bboxes, etc.

Also, text rendering is a big question, since usvg converts text into paths. So you can't select it (this can be fixed).

msand commented 5 years ago

@RazrFalcon The current gesture responder system seems to require that each element with touch-related handlers has correct bounds on the View representing it and on all its ancestors; otherwise the tracking system only recognizes the first touch event correctly, thinks the touch is outside the bounds as soon as any movement happens, and prevents touch handlers from triggering on release/end of the gesture. Bounds and the CTM would be the bare minimum; ideally we'd support all the SVGBoundingBoxOptions for getBBox: https://www.w3.org/TR/SVG/types.html#InterfaceSVGGraphicsElement

Currently the hit testing triggers if the point is either inside the fill or the stroke of the element. But, ideally, we would support all pointer-events values from https://www.w3.org/TR/SVG/interact.html#PointerEventsProp

And the GeometryElement interface:

  boolean isPointInFill(optional DOMPointInit point);
  boolean isPointInStroke(optional DOMPointInit point);
  float getTotalLength();
  DOMPoint getPointAtLength(float distance);

https://www.w3.org/TR/SVG/types.html#InterfaceSVGGeometryElement
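
For what it's worth, the path-measurement half of that interface doesn't need any renderer state. A rough sketch in plain JS (hypothetical helpers operating on a path that has already been flattened to a polyline; fill-rule and pointer-events are ignored):

// Sketch only: `points` is an array of { x, y } vertices approximating the path.
function getTotalLength(points) {
  let length = 0;
  for (let i = 1; i < points.length; i++) {
    length += Math.hypot(points[i].x - points[i - 1].x, points[i].y - points[i - 1].y);
  }
  return length;
}

function getPointAtLength(points, distance) {
  let remaining = distance;
  for (let i = 1; i < points.length; i++) {
    const dx = points[i].x - points[i - 1].x;
    const dy = points[i].y - points[i - 1].y;
    const segment = Math.hypot(dx, dy);
    if (remaining <= segment) {
      const t = segment === 0 ? 0 : remaining / segment;
      return { x: points[i - 1].x + dx * t, y: points[i - 1].y + dy * t };
    }
    remaining -= segment;
  }
  return points[points.length - 1];
}

// Even-odd ray-casting test; a real isPointInFill would also have to honor
// fill-rule="nonzero" and the pointer-events property.
function isPointInFill(points, { x, y }) {
  let inside = false;
  for (let i = 0, j = points.length - 1; i < points.length; j = i++) {
    const a = points[i];
    const b = points[j];
    if (a.y > y !== b.y > y && x < ((b.x - a.x) * (y - a.y)) / (b.y - a.y) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}

The closed-polygon assumption is the main simplification here; isPointInStroke could be approximated by testing the distance from the point to each segment against half the stroke width.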

With regard to text, we also convert everything to paths at the moment, except when inline-size is set; in that case we use the native paragraph rendering API, but this is likely to need rewriting to get shape-inside and shape-outside support. Converting text to paths allows using text for clipping/masking, and as the path for other text inside a textPath element, so you can render text along the curves of glyphs from other fonts, possibly with animation of font-weight and other fontFeatureSettings and fontVariationSettings for OpenType fonts that have variable font support.

msand commented 5 years ago

Oh, and the SVGSVGElement interface has these as well:

  NodeList getIntersectionList(DOMRectReadOnly rect, SVGElement? referenceElement);
  NodeList getEnclosureList(DOMRectReadOnly rect, SVGElement? referenceElement);
  boolean checkIntersection(SVGElement element, DOMRectReadOnly rect);
  boolean checkEnclosure(SVGElement element, DOMRectReadOnly rect);

https://svgwg.org/svg2-draft/struct.html#InterfaceSVGSVGElement
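
In a typical implementation these reduce to rectangle tests against element bounding boxes. A minimal sketch (plain JS, hypothetical helpers; rect and bbox are { x, y, width, height } in the same coordinate space):

function checkIntersection(bbox, rect) {
  return (
    bbox.x < rect.x + rect.width &&
    bbox.x + bbox.width > rect.x &&
    bbox.y < rect.y + rect.height &&
    bbox.y + bbox.height > rect.y
  );
}

function checkEnclosure(bbox, rect) {
  return (
    bbox.x >= rect.x &&
    bbox.y >= rect.y &&
    bbox.x + bbox.width <= rect.x + rect.width &&
    bbox.y + bbox.height <= rect.y + rect.height
  );
}

// getIntersectionList / getEnclosureList then just filter the element list.
function getIntersectionList(elements, rect) {
  return elements.filter((el) => checkIntersection(el.bbox, rect));
}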

RazrFalcon commented 5 years ago

@msand It's hard, but possible. The main problem is that resvg/usvg doesn't have any rendering code in it, so it has no way to know whether a point is inside a stroke or a fill. Same with intersections. So all of those algorithms would have to be written from scratch.

Also, I'm not sure how links (a) should be handled.

The real problem is that usvg provides an interface to a read-only render tree via its C API. You can query some information, but you can't change it; any kind of modification is available only from Rust.

msand commented 5 years ago

@RazrFalcon Doing the rewrite in Rust is certainly a possibility. Although I haven't done any significant work in it, I've read quite a bit, enjoy the syntax and semantics, and have been looking for a project where it would make sense to apply it. Also, it seems Rust might attract more contributors than C++ for this kind of project. There's also a react-native project written mostly in Rust: https://github.com/paritytech/parity-signer/

Writing the algorithms from scratch wouldn't be an issue. If it were cross-platform, it would still reduce the long-term development and maintenance burden of getting platforms other than iOS and Android working with consistent output and behavior.

In this case, the new svg user agent could be embedded either as a standalone svg solution for native apps, or used through bindings from some JavaScript environment, e.g. using JSI in react-native. The main/first use case would be react-native, but the idea would be to make it work in Flutter and other similar frameworks as well.

So in these cases we already have a runtime available, where we can attach event handlers to the elements and use those to handle changes in navigation/application state, so there's no real need for the script element or hyperlinks. There's no guarantee of a URI-like address/location, so an svg a element would behave just like a g element, except if the download attribute is set, in which case a download dialog for the href might make sense if the platform/runtime supports it. But that could also be an event handler without side effects, allowing the developer to handle it appropriately themselves.

Also, animation would probably be provided by the framework/runtime (we might add the animation parts of the spec if everything else is done). We just need to provide property setters for all the elements, functions to do layout and calculate a render tree, rendering of the tree to pixels/a texture, and implementations of the aforementioned element interfaces (hopefully as much as possible of the SVG 1.1 and 2.0 specs). Ideally a render tree could reuse almost everything if only a single property of a single element is animated.

RazrFalcon commented 5 years ago

I have no plans to do animations, mainly because of the complexity. And since SVG has no reference implementation, no one knows how they should work anyway. The official test suite for SVG 1.1 has something like 80 tests and that's it, and since those tests only have PNG images as references, they are useless.

As for API, I will slowly try to add the required methods.

msand commented 5 years ago

I realized the href could be used to open a browser, or even other installed apps, depending on which protocol handlers are registered. But this probably belongs in user-provided event handlers and/or platform-specific but reusable components wrapping/overriding the plain a element.
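
For example, a reusable wrapper along these lines could give the plain a element that behavior. This is a sketch only, not an existing component; it assumes react-native-svg's G accepts onPress and uses react-native's Linking:

import React from 'react';
import { Linking } from 'react-native';
import { G } from 'react-native-svg';

// Sketch of a platform-aware <A>: behaves like <G>, but if an href is given
// and no custom onPress is provided, it asks the OS to open the href.
export function A({ href, onPress, children, ...props }) {
  const handlePress =
    onPress || (href ? () => Linking.openURL(href) : undefined);
  return (
    <G {...props} onPress={handlePress}>
      {children}
    </G>
  );
}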

msand commented 5 years ago

If I were to implement support for the animation part of the spec and there were any ambiguities, I would probably align it with the current behavior of whichever browsers have the most users in common, or, if they're all inconsistent, with whatever seems most expected/desirable from a user perspective, or with the Animated library used in react-native, since the native animation driver from Animated can keep several animations in sync with acceptable performance and sees significant use in the community.

There might be some performance optimizations possible by limiting the logic to the subset of data types used in the svg spec, but the core logic already exists in Animated and I'd like to preserve compatibility with it.

It might even be possible to implement the animation spec by converting it to Animated components (or react-native-reanimated if limitations come up) using the plain JS API; or, if we want to support animating the properties and attributes of the animation elements as well, we'd need native shadowNodes representing them. But essentially these are just helpers to define interpolations with Animated in a declarative way.

So in this sense, the renderer backend just needs to support setters for the input values and re-rendering of the output. Animation wouldn't be a concern for the rendering abstraction, but enabling reuse of the unchanged parts of the render tree, when a setter is called on some element in an svg fragment and a new render tree is requested, would likely have a large impact on memory and CPU pressure for this use case, or any case where the state/input can change and re-rendering is needed, not only animations.

This would probably require keeping the output as an immutable data structure, correctly invalidating dependent parts when values change, and lazily recomputing when a new render tree is requested. So the complexity tradeoff might be quite significant, and possibly much of the time would be spent elsewhere even if the entire render tree were recomputed, depending on the use case. Simple static trees would get slower, but complex scenes with many changing values and significant unchanging parts would likely see big benefits when the changed/unchanged ratio is low.

msand commented 5 years ago

Here's an example of an animation translated to the Animated and Easing api: https://snack.expo.io/@msand/animated-svg-with-bezier-spline-calcmode

import * as React from 'react';
import { Animated, Easing, PanResponder, View } from 'react-native';
import { Svg, Circle } from 'react-native-svg';
const AnimatedCircle = Animated.createAnimatedComponent(Circle);
const AnimatedSvg = Animated.createAnimatedComponent(Svg);

function animateSpline({
  values,
  dur,
  repeatCount,
  begin,
  keyTimes,
  keySplines,
}) {
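  // One Animated.timing per keySpline segment: segment i animates t from
  // keyTimes[i] to keyTimes[i + 1] over that fraction of the total duration,
  // using the cubic-bezier easing given by the corresponding keySpline.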
  const duration = dur * 1000;
  const t = new Animated.Value(keyTimes[0]);
  const splines = keySplines.map((spline, i) => {
    const [x1, y1, x2, y2] = spline;
    const fromValue = keyTimes[i];
    const toValue = keyTimes[i + 1];
    return Animated.timing(t, {
      toValue,
      delay: i == 0 ? begin : 0,
      duration: duration * (toValue - fromValue),
      easing: Easing.bezier(x1, y1, x2, y2),
      useNativeDriver: true,
    });
  });
  const iterations = repeatCount === 'indefinite' ? -1 : +repeatCount;
  const animation = Animated.loop(Animated.sequence(splines), { iterations });
  const value = t.interpolate({
    inputRange: keyTimes,
    outputRange: values,
  });
  return { t, animation, value, splines };
}

export default () => {
  /*
    <circle cx="16" cy="16" r="16">
      <animate 
        attributeName="r" 
        values="0; 4; 0; 0" 
        dur="1.2s" 
        repeatCount="indefinite" 
        begin="0" 
        keyTimes="0;0.2;0.7;1" 
        keySplines="0.2 0.2 0.4 0.8;0.2 0.6 0.4 0.8;0.2 0.6 0.4 0.8" 
        calcMode="spline" />
    </circle>
  */
  const { animation, value } = animateSpline({
    values: [0, 4, 0, 0],
    dur: 1.2,
    repeatCount: 'indefinite',
    begin: 0,
    keyTimes: [0, 0.2, 0.7, 1],
    keySplines: [
      [0.2, 0.2, 0.4, 0.8],
      [0.2, 0.6, 0.4, 0.8],
      [0.2, 0.6, 0.4, 0.8],
    ],
  });
  animation.start();
  return (
    <AnimatedSvg width="100%" height="100%" viewBox="0 0 32 32">
      <AnimatedCircle cx="16" cy="16" r={value} />
    </AnimatedSvg>
  );
};
msand commented 5 years ago

@RazrFalcon I'm thinking that implementing some immutable structure with proper invalidation is probably a bit high on the complexity side. But, at least for pan, zoom and rotate gestures, it would be great if a changing transform could be applied while reusing most or all of the render tree. I think it's only vector-effects that depend on the CTM and would require special treatment, or, in the worst case, recalculating the entire tree.

The responsibility for optimizing and splitting content into separate svg roots, separating frequently changing and static parts, would be on the developer in this case. React already provides a layer for diffing immutable trees and short-circuiting when results are correctly memoized.

Having yet another layer of dependency tracking/invalidation logic in the rendering backend might just slow things down, especially for content where almost all of the content/properties keep changing at full framerate. I would expect VR applications to have tailor-made rendering logic, but I would already aim for 100+ fps to enable those kinds of use cases as well.

For the zoomAndPan attribute, the logic can stay completely on the native side and doesn't need to call into the runtime/JavaScript/React unless there are registered event listeners. So in this case there is no dependency tracking overhead or diff checking in use. The gesture handler could merely set currentScale and currentTranslate and cause a re-render of the render tree with the updated transform. Adding a currentRotate attribute to enable rotation while zooming and panning would also be great. Forcing that to be much more expensive or to use a very different API doesn't seem like the most developer/user-friendly approach.

https://svgwg.org/svg2-draft/struct.html#InterfaceSVGSVGElement

  attribute float currentScale;
  [SameObject] readonly attribute DOMPointReadOnly currentTranslate;

https://svgwg.org/svg2-draft/types.html#InterfaceSVGZoomAndPan
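
While the idea is to keep this on the native side, here is a rough JS-side sketch of the same shape of logic: the gesture only mutates an outer group transform, so the rest of the tree is untouched. Illustrative only; it assumes the Svg root accepts the standard responder props:

import React from 'react';
import { PanResponder } from 'react-native';
import { Svg, G } from 'react-native-svg';

// Sketch of zoomAndPan-style behavior kept outside the renderer: panning only
// changes the transform of one wrapping <G>, never the children.
export function PannableSvg({ scale = 1, children, ...props }) {
  const offset = React.useRef({ x: 0, y: 0 }); // committed translation
  const [drag, setDrag] = React.useState({ x: 0, y: 0 }); // in-flight delta
  const responder = React.useRef(
    PanResponder.create({
      onStartShouldSetPanResponder: () => true,
      onPanResponderMove: (_evt, g) => setDrag({ x: g.dx, y: g.dy }),
      onPanResponderRelease: (_evt, g) => {
        offset.current = { x: offset.current.x + g.dx, y: offset.current.y + g.dy };
        setDrag({ x: 0, y: 0 });
      },
    })
  ).current;
  const tx = offset.current.x + drag.x;
  const ty = offset.current.y + drag.y;
  return (
    <Svg {...props} {...responder.panHandlers}>
      <G transform={`translate(${tx}, ${ty}) scale(${scale})`}>{children}</G>
    </Svg>
  );
}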

RazrFalcon commented 5 years ago

I'm not sure how this is related to resvg. usvg converts an input SVG into a very simple render tree. Then resvg traverses this tree and renders its nodes. That's it. There is no state to store and animate.

So you would have to write a new resvg backend for your drawing library that supports animation. It can't be done in a backend-independent way.

Moreover, the main part is parsing the animation properties from the SVG itself, which is far from trivial.

As for currentScale and currentTranslate, I don't think this should be implemented on the resvg side. It should be done by the viewport itself, which resvg doesn't have.

msand commented 5 years ago

Well, in our case the svg input is created on the fly, and the Animated API already handles animating all properties sufficiently well for most use cases.

The SVG animation spec isn't necessary; a few helpers to translate the common usage patterns to Animated probably cover more than 99% of what's actually used of SMIL.

So I'm just saying that if we've already given an svg fragment as input and only the outermost transform changes, it would be great if we could reuse the unchanged parts.

We can of course provide the entire svg fragment again, but just wrapping the render tree in a group transform that we can change without recomputing the rest would allow sharp re-rendering when the scale changes or parts move inside the clip bounds.

msand commented 5 years ago

Will usvg support vector-effects (at least non-scaling-stroke)? https://svgwg.org/svg2-draft/coords.html#VectorEffectProperty

It would require adding the property to path and image, as they're the only graphics elements in the output. Vector effects (non-scaling-stroke at least) depend on the viewport (and nested viewports): https://svgwg.org/svg2-draft/coords.html#VectorEffectsCalculation https://svgwg.org/svg2-draft/coords.html#NestedVectorEffectsCalculation
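
As a rough illustration, the non-scaling-stroke compensation mostly boils down to dividing the stroke width by the scale the CTM applies. A sketch only; the spec's host-coordinate-space calculation is more involved, especially for non-uniform scale and nested viewports:

// CTM given as [a, b, c, d, e, f], i.e. the matrix
//   | a c e |
//   | b d f |
// Approximate its scale as the geometric mean of the transformed unit vectors
// and compensate the user-space stroke width so the on-screen width stays put.
function nonScalingStrokeWidth(strokeWidth, [a, b, c, d]) {
  const scaleX = Math.hypot(a, b);
  const scaleY = Math.hypot(c, d);
  const scale = Math.sqrt(scaleX * scaleY) || 1;
  return strokeWidth / scale;
}

// Example: under scale(4), a stroke-width of 2 is drawn with a user-space
// width of 0.5 so it still appears 2 units wide after the transform.
nonScalingStrokeWidth(2, [4, 0, 0, 4, 0, 0]); // -> 0.5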

RazrFalcon commented 5 years ago

So I'm just saying that if we've already given an svg fragment as input and only the outermost transform changes, it would be great if we could reuse the unchanged parts.

Are you talking about the render tree or the rendered layers/objects? resvg doesn't have a state, so you cannot apply a transform to it.

resvg has nothing similar to a web browser, where you can modify some node/attribute and it will update the viewport. resvg simply renders nodes to a canvas, one after another.

RazrFalcon commented 5 years ago

Will usvg support vector-effects

SVG 2 is out of scope.

msand commented 5 years ago

Well, if we can't reuse anything when reconstructing the render tree for an svg fragment where some properties have just changed, then I'd fall back to at least optimizing the case of only the outermost transform changing: reuse the entire render tree, wrap it in a G element, and mutate the transform of that outermost G element when needed, and recompute the entire render tree if anything else changes. The rendering backend should certainly be stateless; it's just the display list / render tree I would like to reuse as much as possible.

RazrFalcon commented 5 years ago

I don't understand why you need to recompute anything. A transform only affects rendering; it doesn't affect the render tree in any way.

msand commented 5 years ago

Yes, that's correct, which is why I would optimize that case at the very least. I was thinking that if you intended to simplify paths with vector-effects into ones without them, then the render tree would depend on the CTM.

In general, event listeners can cause any kind of state change, and any or all properties may or may not change. If the entire render tree needs to be recomputed even when a single property changes deep inside some fragment, it might be worthwhile to optimize reuse of the tree. But until that's a reproduced and measured issue, it might not require further consideration.

RazrFalcon commented 5 years ago

Again, there is nothing to optimize, because resvg doesn't work like that. It always renders the whole tree. It's not a browser.

msand commented 5 years ago

@RazrFalcon Sorry if I'm not expressing myself very well, a bit slow weekend atm.

Yeah, I'm thinking of usvg, as that's what constructs the render tree for resvg-(skia/qt/raqote) to consume, no? Usually we wouldn't have or create any XML strings (unless a user just wants to render a static svg file); we already have a vdom in React which corresponds to an svg XML fragment/DOM.

Instead, when React commits new output, we would have our bindings (in our shadowNodes) call into usvg/svgdom to create an input for usvg to process. This would produce a render tree, which resvg would use with whatever rendering backend it's been built with, and the resulting bitmap would be given to the native render API and composited into the window using a single native view.

When React commits some changes, the property setters get called for every changed property in every shadowNode; at this point our bindings would change the values in the current usvg/svgdom representation, create another render tree with usvg, render with resvg, and composite natively. Does this sound reasonable to you?

RazrFalcon commented 5 years ago

Yes, this is exactly how it works. Everything depends on which properties you plan to change.

If you want to scale a particular shape, then there is no need to recalculate the render tree. Also, you don't even need an SVG file, since you can create a render tree from scratch; it's just a tree of commands for resvg.

But this is the usvg side. The problem is that resvg doesn't know anything about tree modifications; there are no event loops or anything, and resvg doesn't preserve anything. So if you want to move one shape by 5px along the X axis, you have to render the whole image again.

If you need dynamic SVG, you need a dynamic, stateful renderer (like in a browser or a vector editor), which resvg doesn't have. But it can be implemented.

msand commented 5 years ago

I think the current approach we have is fine for now; at least it seems to work well enough in the current renderers. We preserve the paths for elements and groups where it's safe, but the entire svg fragment still causes render calls on every render and resets the paint for each element. Right now there is a lot of state thrashing, making it much slower than it could be, and people certainly ask for better performance. But that's more about optimizing the rendering logic than about reuse of a display list / render tree.

Android has support for RecordingCanvas in the upcoming API 29. I'm not sure how much of that depends on Skia and how much of it lives in other layers, and I'm not sure how big the possible benefits are. But if it's possible to build relatively easily with Skia, it might be an interesting optimization opportunity for e.g. resvg-skia. This would make it available across more Android API levels as well. https://developer.android.com/reference/android/graphics/RecordingCanvas

RazrFalcon commented 5 years ago

The easiest and most robust variant is to simply cache all the data (like a browser does). But this would have to be implemented for each backend separately (because it will be slower and will consume much more RAM than the current algorithm). Then we wouldn't need to create layers, native paths, styles, etc. each time.

But there are also a lot of things that can't be optimized, like complex path rendering, filters (especially blur), compositing, clipping and masking.

msand commented 5 years ago

So, with regard to the code that would live in this library's bindings, the main optimization opportunity would be knowing when we need to redo the usvg render tree calculation, and when and what we can reuse from the existing one: either change the values there directly or, if we treat it immutably, create a new one with structural sharing.

Some reuse in renderer backends could possibly be built as a layer on top of the existing ones, but it should probably be optional, to allow tuning for various tradeoffs. Having the developer split static parts into separate roots and position them absolutely, to get caching of unchanging content, delivers a lot of the possible benefit with no additional complexity or logic needed in the lower layers, so that's probably good enough as is for now. Still, making an alternative backend with bindings to the Android RecordingCanvas would be an interesting benchmark once we have the main pipeline working.
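
For example, that splitting pattern is already expressible with plain React memoization today. A sketch with made-up content:

import React from 'react';
import { View } from 'react-native';
import { Svg, Path, Circle } from 'react-native-svg';

// Static background in its own root: React.memo means it never re-renders,
// so its native tree (and whatever the backend caches for it) stays untouched.
const StaticBackground = React.memo(() => (
  <Svg style={{ position: 'absolute' }} width="100%" height="100%" viewBox="0 0 100 100">
    <Path d="M0 0 H100 V100 H0 Z" fill="#eee" />
  </Svg>
));

// Only the dynamic overlay re-renders when `x` changes.
export const Scene = ({ x }) => (
  <View style={{ flex: 1 }}>
    <StaticBackground />
    <Svg style={{ position: 'absolute' }} width="100%" height="100%" viewBox="0 0 100 100">
      <Circle cx={x} cy={50} r={5} fill="tomato" />
    </Svg>
  </View>
);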

RazrFalcon commented 5 years ago

You can change any part of the usvg render tree as you want. There is no need for recalculation, since it doesn't have any inheritable properties, unlike SVG itself.

As for resvg, the most expensive part is layers, and we need a lot of them. Currently, resvg is specifically designed to reuse those layers, but this method is completely unsuitable for your case.

msand commented 5 years ago

The thing is, the input we get is normal svg including inheritance (no cascading, as we don't support CSS selectors). When a property of some ancestor changes and descendant elements inherit that property, we would need to recalculate the render tree, ideally only for the parts of the tree where the inheritance actually has an effect; that would be the main optimization compared to recalculating the entire render tree as soon as some inherited property changes. The same applies if any affected element is referenced in a clipPath, filter or mask element. But this would require dependency tracking in the shadowNodes, to know which parts of the usvg output are invalidated by changes in the svgdom input.
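
A minimal sketch of that invalidation over a plain JS tree (illustrative names only, no real usvg/svgdom involved): when an inherited property changes on a node, its whole subtree is dirty, plus anything that references an affected node via clip-path, mask or filter.

// node: { id, children: [] }
// referencedBy: Map from an element id to the nodes that reference that
// element via clip-path, mask or filter (built when the input is committed).
function collectDirty(changedNode, referencedBy, dirty = new Set()) {
  if (dirty.has(changedNode.id)) return dirty;
  dirty.add(changedNode.id);
  // Inherited properties affect every descendant.
  for (const child of changedNode.children) {
    collectDirty(child, referencedBy, dirty);
  }
  // Anything that paints using an affected element is affected too.
  for (const user of referencedBy.get(changedNode.id) || []) {
    collectDirty(user, referencedBy, dirty);
  }
  return dirty;
}

Only the nodes in the resulting set would need their part of the render tree recomputed; everything else could be reused structurally.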

RazrFalcon commented 5 years ago

If you want to change a property in the SVG itself, then you have to recreate the render tree from scratch. I don't think it's possible to do a partial update; SVG is too complex and interconnected for that.

PS: svgdom was removed in the master.

msand commented 5 years ago

You can think of a react-native app a bit like compiling your own tailor-made browser, with us providing the implementation of the svg IDL spec, such that React can be used to render the UI and attach event handlers.

RazrFalcon commented 5 years ago

I understand, but resvg's architecture is very different from a browser's.

msand commented 5 years ago

Sure, but for now I still think svgtree+usvg+resvg+resvg-skia is the main candidate to base a rewrite on, although fastuidraw seems very interesting as well; it could be built as another resvg backend instead.

We can just recalculate the entire render tree as soon as any inherited property changes or any referenced element changes in any way. The case of only the outermost transform changing can probably be handled without recalculating the render tree, at least if we skip vector-effects and any other CTM-dependent behavior.

Although non-scaling-stroke would only require passing the vector-effect value through usvg to the backend, so that it can be used if the specific rendering backend chooses to implement support for it.

msand commented 4 years ago

@RazrFalcon I got an idea for a simpler way to attempt an initial integration of react-native and resvg. It could focus on the use case of static svg files, and merely give a path or file handle to resvg to render static icons etc.

I'm wondering: are there any optimisations / opportunities for reuse if the same icon is rendered several times at e.g. different bitmap sizes? Is it reasonable/possible to do the usvg processing at build time rather than at run time?

The faq / answer here is a bit confusing: https://github.com/RazrFalcon/resvg/tree/master/usvg#how-to-ensure-that-svg-is-a-valid-micro-svg

It seems like, if the svg content is static and the versions of usvg and resvg aren't changing, it should be fine, and even preferable for performance reasons, to reuse the usvg output and only redo that processing at build time when the content or the usvg/resvg versions change.
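
As a sketch of what the binding-side caching could look like, where everything, including the ResvgRenderer native module and its renderFile method, is hypothetical:

import { NativeModules } from 'react-native';

// Hypothetical native binding: hands a file path to resvg and returns a
// bitmap handle. Neither this module nor its method exist today.
const { ResvgRenderer } = NativeModules;

const bitmapCache = new Map();

// Rendering the same static file at the same size reuses the cached result;
// the parsed tree itself could likewise be kept on the native side, keyed by
// the file path, so a new size only pays for rasterization.
export function renderStaticIcon(path, width, height) {
  const key = `${path}:${width}x${height}`;
  if (!bitmapCache.has(key)) {
    bitmapCache.set(key, ResvgRenderer.renderFile(path, width, height));
  }
  return bitmapCache.get(key);
}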

RazrFalcon commented 4 years ago

Are there any optimisations / opportunities for reuse if the same icon is rendered several times at e.g. different bitmap sizes?

Sort of. You can skip the parsing/preprocessing stage by keeping the usvg::Tree around. But in most cases, rendering is like 80% and parsing just 20%. It depends heavily on the file itself, but usually the parsing is fairly fast. You can use the --perf flag of the rendersvg CLI tool to see the real numbers.

The faq / answer here is a bit confusing

usvg dumps a preprocessed, simplified SVG. But to load it back, I have to do all the parsing over again, because I can't guarantee that the file was actually created by usvg and not modified since, so we still have to do all the checks. Yes, we could write a dedicated parser just for SVG files generated by usvg, but it would take something like 2000-3000 LOC and wouldn't be that much faster.

Anyway, my point is that in something like 90% of cases, parsing time is negligible. Something like a Gaussian blur will take many times longer.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You may also mark this issue as a "discussion" and I will leave this open.

assertchris commented 4 years ago

So, can we open this up again, since MS is supporting Windows + macOS?

ayushpurnarawat commented 4 years ago

How do I use svg in react-native-macos?

ospfranco commented 4 years ago

Hey, this discussion kinda went over my head, but is there any news on this front? There is also a duplicate ticket now: https://github.com/react-native-community/react-native-svg/issues/1426

SamuelScheit commented 3 years ago

any updates?