w3ctag / design-reviews

W3C specs and API reviews
Creative Commons Zero v1.0 Universal

Declarative Shadow DOM #494

Closed mfreed7 closed 3 years ago

mfreed7 commented 4 years ago

Hello TAG!

I'm requesting a TAG review of Declarative Shadow DOM.

A declarative API to allow the creation of shadow roots using only HTML and no JavaScript. This API allows Web Components that use Shadow DOM to also make use of Server-Side Rendering (SSR), to get rendered content onscreen quickly without requiring JavaScript for shadow root attachment and population.
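For context, the proposed syntax looks roughly like the sketch below (the `shadowroot` attribute is the one discussed in this thread; `<my-card>` is an illustrative host element name, not part of the proposal):

```html
<!-- A host element with a declaratively-attached shadow root.
     <my-card> is an illustrative name, not part of the proposal. -->
<my-card>
  <template shadowroot="open">
    <style>p { color: blue; }</style>  <!-- scoped to the shadow root -->
    <p>Shadow content</p>
    <slot></slot>                      <!-- light DOM children render here -->
  </template>
  <p>Light DOM content, slotted into the shadow tree</p>
</my-card>
```

When the parser encounters this, it attaches a shadow root to the host and moves the template's contents into it, with no script involved.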

Further details:

We'd prefer the TAG provide feedback as:

☂️ open a single issue in our GitHub repo for the entire review

alice commented 4 years ago

Thanks for the thorough explainer and links out to previous discussion!

I've started working my way through them, but I haven't really seen the full use case spelled out: that is, can you explain or point me to a part of the explainer or previous discussions which explain what the need is for server-side rendering/no-JS solutions for Shadow DOM?

For example, the explainer mentions search engine indexing, but I don't have a good mental model of a situation in which content in shadow DOM would be relevant for indexing.

Similarly, for users running with no JS, under what circumstances would providing the content within Shadow DOM (as opposed to slotted) make a critical difference to the experience?

Glancing through the previous discussions, it seems like there is strong developer demand for this, but I gather there is context which isn't included in those discussions, which it seems like I've missed.

mfreed7 commented 4 years ago

Thanks for kicking off this review!

So your question seems to be why Shadow DOM is needed in SSR/no-JS content? I.e. there is a section of the explainer that details why people want No-JS, but you're asking why they need Shadow DOM with that?

If so, I think the main reason would be that for people using Web Components, they have custom elements that use Shadow DOM. Those components assume the style encapsulation of Shadow DOM, and have stylesheets built accordingly. If those sites are to be SSR'd, and if there is no way to stream out the composed page including the Shadow DOM, then the SSR library would have to do significant work to essentially re-write the entire page in terms of just light-DOM. On the other hand, if a declarative Shadow DOM primitive existed, a simple call to getInnerHTML() would retrieve the entire SSR content that should be streamed to the client. That would be less work on the server, and also on the client, since that composed page could be re-used as-is once the components are hydrated. No need to wipe out the declarative content and re-build it.
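To illustrate the serialization shape this enables, here is a pure-string sketch. `serializeWithShadow` is a hypothetical helper, not the proposed `getInnerHTML()` API; it only shows the declarative form that a shadow-including serializer would emit:

```javascript
// Hypothetical helper for illustration only: shows the declarative form that a
// shadow-including serializer like the proposed getInnerHTML() would emit.
function serializeWithShadow(tagName, shadowHTML, lightHTML, mode = 'open') {
  return `<${tagName}>` +
         `<template shadowroot="${mode}">${shadowHTML}</template>` + // shadow tree, declaratively
         lightHTML +                                                 // light DOM children
         `</${tagName}>`;
}

const out = serializeWithShadow('my-card', '<slot></slot>', 'Hello');
// out: '<my-card><template shadowroot="open"><slot></slot></template>Hello</my-card>'
```

The key point is that the client can parse this output directly into the composed tree, with no script needed to rebuild the shadow roots.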

The above paragraph is addressed to the SSR and no-JS-user cases, because the goal there is to deliver pixel-correct output via declarative content. The search engine indexing case is a bit different, and as you point out, the "interesting" content is likely slotted in from the light-DOM. But in the discussions I've had with developers, they seem concerned that there is a risk of not properly representing the content to crawlers if the entire shadow DOM is missing.

LMK if that answers your questions!

alice commented 4 years ago

Thanks for the quick response.

It seems like the underlying problems that this API is looking to solve (primarily by allowing server-side rendering to occur) are:

  • getting pixels on the screen quickly, even if those pixels aren't interactive yet (or, alternatively, avoiding layout shifting after javascript is loaded), and
  • allowing content in Shadow DOM to be indexed by search engines (it would still be good to have a working example to understand why that is desirable).

Is that right?

kenchris commented 4 years ago
  • allowing content in Shadow DOM to be indexed by search engines (it would still be good to have a working example to understand why that is desirable).

This has been quite desirable before, especially for e-commerce - that was mostly due to the Google and Bing crawler using very old browser engines (I think Google Search used to use M42 until a year or so ago), as well as being quite slow at indexing with JavaScript enabled. Like it might index immediately without JS but then it could take a week or so before it got reindexed, which has been a no-go for a lot of companies.

I personally know companies who have disregarded Web Components due to not supporting SSR as it would result in very poor SEO and kill their businesses.

On the other hand, it seems that both G Search and Bing are using modern engines now that are being updated on a good cadence. I am still not sure if you can rely on things being indexed quickly with JS enabled.

Maybe Martin knows better (@AVGP)

mfreed7 commented 4 years ago
  • getting pixels on the screen quickly, even if those pixels aren't interactive yet (or, alternatively, avoiding layout shifting after javascript is loaded), and
  • allowing content in Shadow DOM to be indexed by search engines (it would still be good to have a working example to understand why that is desirable).

Is that right?

Yes, that is right. I think there are two other compelling use cases for this feature, in addition to those two:

  • Allowing serialization and deserialization of DOM that includes Shadow DOM. Currently, there is no way to get innerHTML including shadow roots, because there is no declarative way to represent it.
  • CSS developers, interested in using the style-scoping feature of Shadow DOM, do not want to use (or their design system prohibits) Javascript.

I personally know companies who have disregarded Web Components due to not supporting SSR as it would result in very poor SEO and kill their businesses.

On the other hand, it seems that both G Search and Bing are using modern engines now that are being updated on a good cadence. I am still not sure if you can rely on things being indexed quickly with JS enabled.

I have also heard this objection/concern from developers several times. Even if they want to use Web Components, they can't, because SSR is seen as a hard requirement.

alice commented 4 years ago

Thanks so much for the additional detail, and I promise we won't be talking about use cases forever, but I just have a few more clarifications to ask for:

  • Allowing serialization and deserialization of DOM that includes Shadow DOM. Currently, there is no way to get innerHTML including shadow roots, because there is no declarative way to represent it.

Can you give an example of why this is needed? e.g. if you're using a custom element, doesn't the custom element know how to re-create its own Shadow DOM, meaning the light DOM is sufficient?

  • CSS developers, interested in using the style-scoping feature of Shadow DOM, do not want to use (or their design system prohibits) Javascript.

This makes me somewhat uneasy. My understanding is that style scoping isn't a "feature" of Shadow DOM that you can choose to use in isolation, but rather part of the encapsulation guarantees that Shadow DOM provides, in order to ensure that components using Shadow DOM can be safely re-used in any context.

This encapsulation has consequences for things like:

  • label association and other IDREF-based associations (yes, we are working on script-based solutions to these problems but in a no-JS context the associations would still fail),
  • user scripts and customisations like extensions (ChromeVox classic had to be retired, for example, because it was impossible to make it work with Shadow DOM, due to the event path being obscured)
  • testing tools - many testing tools have been reworked to be able to traverse Shadow DOM via open shadow roots, but it requires significant engineering effort.

Do you know to what extent Shadow DOM is being used in this way? Have I misunderstood?

Even if they want to use Web Components, they can't, because SSR is seen as a hard requirement.

To what extent is this based on a (possibly outdated) perception of how search engines work, vs a requirement for rapid rendering on the client?

Sorry for all the meta-discussion, but I'd really like to fully understand the context for this feature before trying to form a solid opinion on the design.

mfreed7 commented 4 years ago
  • Allowing serialization and deserialization of DOM that includes Shadow DOM. Currently, there is no way to get innerHTML including shadow roots, because there is no declarative way to represent it.

Can you give an example of why this is needed? e.g. if you're using a custom element, doesn't the custom element know how to re-create its own Shadow DOM, meaning the light DOM is sufficient?

Two reasons, I think:

  • CSS developers, interested in using the style-scoping feature of Shadow DOM, do not want to use (or their design system prohibits) Javascript.

This makes me somewhat uneasy. My understanding is that style scoping isn't a "feature" of Shadow DOM that you can choose to use in isolation, but rather part of the encapsulation guarantees that Shadow DOM provides, in order to ensure that components using Shadow DOM can be safely re-used in any context.

This encapsulation has consequences for things like:

  • label association and other IDREF-based associations (yes, we are working on script-based solutions to these problems but in a no-JS context the associations would still fail),
  • user scripts and customisations like extensions (ChromeVox classic had to be retired, for example, because it was impossible to make it work with Shadow DOM, due to the event path being obscured)
  • testing tools - many testing tools have been reworked to be able to traverse Shadow DOM via open shadow roots, but it requires significant engineering effort.

Do you know to what extent Shadow DOM is being used in this way? Have I misunderstood?

It's a good question. For right now, there is no other style scoping mechanism provided by the web platform. If you want style scoping, you either roll it yourself, or you use Shadow DOM. I agree that Shadow DOM includes much more than just style encapsulation, and as you mentioned, that has other consequences. But for now, it's all we have. There is a project underway to propose and spec a light-DOM style scoping mechanism, but that is likely a ways off. We haven't even published that proposal yet. In the meantime, Shadow DOM is the style scoping mechanism we have. I would argue that the consequences you list above are ones that we should try to fix anyway. They represent shortcomings of tooling w.r.t. Shadow DOM in general. No?
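As a minimal sketch of that use case, assuming the declarative syntax from this proposal (`<style-scope-demo>` is an illustrative name), Shadow DOM can provide no-JS style scoping:

```html
<!-- The p rule inside the shadow root applies only there; the outer <p>
     is unaffected. No JavaScript is required for the scoping. -->
<style-scope-demo>
  <template shadowroot="open">
    <style>p { color: red; }</style>
    <p>Red, scoped to the shadow root</p>
  </template>
</style-scope-demo>
<p>Default color: outer page styles are untouched</p>
```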

Even if they want to use Web Components, they can't, because SSR is seen as a hard requirement.

To what extent is this based on a (possibly outdated) perception of how search engines work, vs a requirement for rapid rendering on the client?

Sorry for all the meta-discussion, but I'd really like to fully understand the context for this feature before trying to form a solid opinion on the design.

It is possible that part of this requirement is due to out-dated perceptions of search engines. However:

  1. this perception seems to be quite widely held
  2. search engine behavior is (a) not well publicized, on purpose, and (b) hugely important to commerce on the web.

Primarily for reason #2 above, I can completely understand the ongoing hard requirement to use SSR. Even knowing what I know, if I were building an external site, I would likely reach the same conclusion.

alice commented 4 years ago

Thanks for the continued responses! We discussed this a little in the breakout today, and will pick it up again at the "face to face" in a couple of weeks. I think this probably gives us enough context to read through the proposal in depth.

mfreed7 commented 4 years ago

That sounds great, thanks!

hober commented 4 years ago

@alice and I took a look at this during the TAG F2F this week, and we noticed a few things. Here's the first:

I'm surprised and concerned that this enables the serialization of closed shadow trees by code outside the shadow tree. Doesn't that defeat the purpose of having closed shadow trees in the first place? I suppose you might want such a feature for a browser's "save this page" feature, or maybe for server-side code, but perhaps that would be better served by an API that's not available to page authors.

hober commented 4 years ago

delegatesFocus probably won't be the last option that gets added to ShadowRootInit, and I think this proposal implies that we'd need to add a shadowrootfoo="" attribute to <template> for every boolean foo we add to ShadowRootInit in the future. That seems kinda messy.

Instead, could we maybe have a single shadowrootoptions content attribute containing a DOMTokenList representing these boolean flags (e.g. shadowrootoptions="delegates-focus future-thing-1 future-thing-2")?

alice commented 4 years ago

Regarding the timing of attachment, we noted that the primary reason for attaching the shadow root when the closing <template> tag was parsed was due to implementation complexity.

This is explicitly counter to the reasoning in the priority of constituencies.

I understand that a non-zero number of pages are of the form

<my-app>
  #shadowroot
    <rest>
       <of>
          <the>
             <app>
</my-app>

For these pages, it seems like the entire page would need to be parsed before any content would show up. Is that right?

Is there any chance of revisiting this decision?

alice commented 4 years ago

With the proposal to use an idref to an existing template, it seems like this would address some of the concerns we had about this feature pointing to a missing "declarative custom element" feature.

@justinfagnani has pointed out to me that reusable elements are a form of compression, although it's true that compression is also a form of compression!

Regarding the technical issues:

  • It seems that reasonable answers could be found to the question of what happens if <template> is defined after the <custom-element> (particularly since this would be poor practice, like having two <template shadowroot>s inside a single host element).
  • Regarding IDREFs piercing shadow boundaries, this seems like a question that badly needs a solution. We have been working on this for AOM, with one strawman proposal being to introduce a globalid attribute which can be used for shadow-piercing references (but not used in a getElementById() lookup). This would also address your comment about AOM IDL attributes in the "Other unanswered questions" section.
  • The question about "how often this would be helpful" seems like one that is worth answering in earnest - how often is shadow DOM contents strictly boilerplate, with stateful content slotted in, vs encapsulating state as well as structure?

alice commented 4 years ago

One meta-comment: we love how thorough and yet succinct and readable this explainer is!

Just one suggestion: please consider adding a table of contents using doctoc as recommended in our Explainer Explainer.

mfreed7 commented 4 years ago

@alice and @hober, thanks for the feedback! See below for some responses.

I'm surprised and concerned that this enables the serialization of closed shadow trees by code outside the shadow tree. Doesn't that defeat the purpose of having closed shadow trees in the first place? I suppose you might want such a feature for a browser's "save this page" feature, or maybe for server-side code, but perhaps that would be better served by an API that's not available to page authors.

I would be concerned about this also... but I don't agree that the proposal allows this. Note that serialization of closed shadow roots is only allowed if the closed roots themselves are explicitly passed in. So unless the code already has access, the closed shadow root is not serialized. Does this clarify the situation, or did I misunderstand your concern here?
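The opt-in gating described here can be modeled in plain logic. This is a sketch: `shouldSerialize` is a hypothetical stand-in for the serializer's internal check, not a real API; the assumption is that closed roots are included only when explicitly passed in by the caller:

```javascript
// Hypothetical model of the gating: a closed shadow root is serialized only if
// the caller explicitly passed it in, i.e. the caller already had a reference.
function shouldSerialize(root, explicitlyPassedClosedRoots = []) {
  if (root.mode === 'open') return true;             // open roots always serialize
  return explicitlyPassedClosedRoots.includes(root); // closed roots need opt-in
}

const closed = { mode: 'closed' };
shouldSerialize(closed);            // false: closed root stays hidden by default
shouldSerialize(closed, [closed]);  // true: code that already has access opted it in
```

So code without a reference to the closed root gets the same output as today, with the closed tree omitted.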

delegatesFocus probably won't be the last option that gets added to ShadowRootInit, and I think this proposal implies that we'd need to add a shadowrootfoo="" attribute to <template> for every boolean foo we add to ShadowRootInit in the future. That seems kinda messy.

Instead, could we maybe have a single shadowrootoptions content attribute containing a DOMTokenList representing these boolean flags (e.g. shadowrootoptions="delegates-focus future-thing-1 future-thing-2")?

I completely agree that this is "ugly". I did think about a DOMTokenList-like solution here, but the problem is non-boolean parameters. For example, the next such attribute I can see coming is the 'slotAssignment' parameter in the Imperative Shadow DOM Distribution API, which isn't exactly binary. In that case, it likely could be made binary, but there are no guarantees about future parameters. In a world where some parameters take values, would you still prefer having some be part of a DOMTokenList and others be standalone attributes? Are there other precedents in the Web Platform that we can draw inspiration from? I'm really open to suggestions here.

Regarding the timing of attachment, we noted that the primary reason for attaching the shadow root when the closing <template> tag was parsed was due to implementation complexity.

This is explicitly counter to the reasoning in the priority of constituencies.

I understand that a non-zero number of pages are of the form

<my-app>
  #shadowroot
    <rest>
       <of>
          <the>
             <app>
</my-app>

For these pages, it seems like the entire page would need to be parsed before any content would show up. Is that right?

Is there any chance of revisiting this decision?

You are correct that the rendering will happen only at the closing tag of the outer-most <template shadowroot> tag, which precludes "streaming". And this is unfortunate.

Nominally, I agree that changing this API to be less powerful, and correspondingly less complex, runs counter to the priority of constituencies. However, there is a pragmatic interpretation of the priority of constituencies here: if the implementers are unwilling to embark on adding this feature because they believe the effort required is not justified, then users' and authors' interests haven't really been prioritized, since they don't get the new feature at all. In other words, that would be making the perfect the enemy of the good. In this case, I believe there is standalone value to the non-streaming version of declarative Shadow DOM, and if the demand for the "streaming version" proves to be worth the effort, then that can always be added as a follow-on feature. For example: <template shadowroot=open streaming>. Would you agree?

With the proposal to use an idref to an existing template, it seems like this would address some of the concerns we had about this feature pointing to a missing "declarative custom element" feature.

@justinfagnani has pointed out to me that reusable elements are a form of compression, although it's true that compression is also a form of compression!

I do agree that this, also, is a good upgrade for this feature. However, as you know, there are still real problems with a general id-ref based solution of any kind, and those would need to be solved first. Again here, I think there is a natural expansion path for "shared" declarative shadow DOM with something like <template shadowroot=open shadowtemplate=#magic-global-id>.

One point I want to make here - I really believe this should be declarative Shadow DOM and not tied to custom elements. I'm not sure if that's what you were suggesting, but I think we have separate features (Shadow DOM and Custom Elements) and we should try to avoid requiring one to use the other.

Regarding the technical issues:

  • It seems that reasonable answers could be found to the question of what happens if <template> is defined after the <custom-element> (particularly since this would be poor practice, like having two <template shadowroot>s inside a single host element).

Agreed.

  • Regarding IDREFs piercing shadow boundaries, this seems like a question that badly needs a solution. We have been working on this for AOM, with one strawman proposal being to introduce a globalid attribute which can be used for shadow-piercing references (but not used in a getElementById() lookup). This would also address your comment about AOM IDL attributes in the "Other unanswered questions" section.

I have been following this discussion, and I do like the direction it is heading. But as you know, there are still unanswered questions, and I'd like to avoid holding up this proposal on a full resolution of the idref problem. Pragmatically, how close to agreement do you think the 'globalid' attribute is?

  • The question about "how often this would be helpful" seems like one that is worth answering in earnest - how often is shadow DOM contents strictly boilerplate, with stateful content slotted in, vs encapsulating state as well as structure?

This is a good question, for which I don't have a great answer. I'm not sure what the best way to go about trying to find out - suggestions?

One meta-comment: we love how thorough and yet succinct and readable this explainer is!

Just one suggestion: please consider adding a table of contents using doctoc as recommended in our Explainer Explainer.

Glad you liked it! Yes, I can definitely add a TOC - will do.

Thanks, Mason

alice commented 4 years ago

@plinss, @hober, @atanassov, @dbaron and I discussed this at our breakout this week (will link to minutes once published; that will happen after the plenary meeting tomorrow).

  • We agree that reusable templates would be a good upgrade, and that there's no need to block on having a full solution available there, since it could be layered in later.
  • We didn't have a good idea of how to research how often Shadow DOM content would vary between instantiations of custom elements at the point of serialization - perhaps this is something that could be investigated as part of an origin trial or similar?
  • We read the additional explanation of how innerHTML works with closed shadow roots which you linked to - it does answer Tess' question, and we would suggest integrating that directly with the Serialization section rather than having it as a separate subsection in a totally different section of the explainer (even if it does happen to be immediately afterwards).
  • We discussed the idea of combining the arguments to attachShadow() into a single content attribute, and agreed that it's probably not a good idea (and premature in any case) to try and design some kind of microsyntax to serialize these into a single attribute.

We had a longer discussion on the subject of supporting streaming shadow root contents.

It does seem like this would be a preferable default option, if it wasn't for the implementation cost, so it would be good to understand just how significant the implementation cost would be.

Specifically, not making streaming the default option now means that if it does get added in the future, it will be an additional implementation burden on potential future implementations to support the flag, and on authors to enable it. (We did note, however, that in practice that is unlikely to be very many authors, since this would probably be done by server-side library code.) Adding streaming as a default later on didn't seem feasible, since it would need to be feature-detected.

We also thought it might be worth exploring whether there might be a way to compromise to get at least some of the user experience benefit of streaming without needing to solve all of the complex implementation issues.

mfreed7 commented 4 years ago

@plinss, @hober, @atanassov, @dbaron and I discussed this at our breakout this week (will link to minutes once published; that will happen after the plenary meeting tomorrow).

Thanks for the feedback! Responses below.

  • We agree that reusable templates would be a good upgrade, and that there's no need to block on having a full solution available there, since it could be layered in later.

Great news, thanks.

  • We didn't have a good idea of how to research how often Shadow DOM content would vary between instantiations of custom elements at the point of serialization - perhaps this is something that could be investigated as part of an origin trial or similar?

Yes, I was curious about this as well, since I only heard it anecdotally while developing this feature. I will make it a point of any origin trials to try to gather more data here.

  • We read the additional explanation of how innerHTML works with closed shadow roots which you linked to - it does answer Tess' question, and we would suggest integrating that directly with the Serialization section rather than having it as a separate subsection in a totally different section of the explainer (even if it does happen to be immediately afterwards).

Good suggestion! Done. It looks similar, but I've moved the two sections ("Closed shadow roots" and "Additional arguments for attachShadow") out of the "Other Considerations" section up into the "Serialization" section proper.

  • We discussed the idea of combining the arguments to attachShadow() into a single content attribute, and agreed that it's probably not a good idea (and premature in any case) to try and design some kind of microsyntax to serialize these into a single attribute. If in future we do end up with many shadowrootdelegatesfocus-type attributes, we could investigate what type of simplification might make sense at that point.

Sounds good, thanks.

We had a longer discussion on the subject of supporting streaming shadow root contents.

It does seem like this would be a preferable default option, if it wasn't for the implementation cost, so it would be good to understand just how significant the implementation cost would be.

I agree here - streaming would be preferable. The problem is (I think) that the implementation cost is seen (e.g. here) as being due to security risks and bugs. I think the difference in "pure" implementation cost is actually quite similar for the opening vs. closing tag. It is just that having a live shadow root document into which elements are parsed is new, and opens the usual Web can of worms of possibilities for what can happen. I do empathize with that concern. This proposal (non-streaming) gets the syntax, the use case, and some data up and running, and from there we can hopefully expand to a streaming version later.

Specifically, not making streaming the default option now means that if it does get added in the future, it will be an additional implementation burden on potential future implementations to support the flag, and on authors to enable it. (We did note, however, that in practice that is unlikely to be very many authors, since this would probably be done by server-side library code.) Adding streaming as a default later on didn't seem feasible, since it would need to be feature-detected.

I agree that if we add a streaming option (e.g. <template shadowroot=open streaming>) later, this will require some change to the content, e.g. to add the "streaming" attribute. As you point out, that is likely to be easy to change, due to the use of server-side library code. On the implementation side, I don't actually think it will be much of an implementation burden. The code to implement the existing (non-streaming) version of this feature is relatively small, and much of it could likely be shared by both versions. But that, of course, remains to be seen. From your comment, it sounds like you're reluctantly ok with this choice - please correct me if I'm wrong.

We also thought it might be worth exploring whether there might be a way to compromise to get at least some of the user experience benefit of streaming without needing to solve all of the complex implementation issues.

  • For example, could you defer only script execution until all shadow content has parsed, and then later add an option to execute scripts as they stream?

I would love to do this, but as I mentioned above, I think the devil (and implementation cost) is in the details. As far as I understand it (and I'm open to pointers here!), we can either create the shadow root upon the opening or closing tag. Full stop. The current proposal uses the closing tag, because it can then re-use the (debugged, working) machinery for <template> as-is without modification. If we move to the opening tag, the can of worms is now open, and any/all security bugs are possible. I agree that deferring scripts might alleviate some of the risks, but even implementing that script deferral needs to be done carefully, and thoroughly. For example, what happens if script moves an existing <script> node into the currently-being-parsed #shadowroot - should it execute?

Let me know if you have suggestions here - I'd love to get a streaming version implemented that will get multi-implementer support. As-is, I still have no indications of support from the other engines, and that is without adding more complexity.

Westbrook commented 4 years ago

@mfreed7 this is very exciting to see coming together!

Two things come to mind in reviewing the explainer that I'd love your thoughts on...

Would you be open to comparing an additional baseline, or maybe a parallel set of baselines? In particular, I predominantly experience shadowRoots being used with custom elements, as in the syntax of your proposed solution. Without a declarative approach, these shadowRoots are currently created by a single element upgrade process. It's very possible that I misunderstand how the custom element upgrade process performs relative to your current array of tests, but from the outside it would seem to amount to fewer bytes over the wire (by reducing duplicated <template> elements) and likely faster rendering (being closer to the metal of the browser). Whatever the results, I would see coverage of this comparison as a nice addition to your explainer, and further clarity as to the correct path forward in relation to some of the alternatives you've already listed.

What thoughts do you have on the complexity ceiling of this approach? For instance, the relatively benign shadowRoot that you rely on in your example is low-cost to duplicate across multiple instances of an element, and doesn't point to the possibility of more complex elements relying on elements with shadowRoots as part of their template and/or light DOM. Do you see this approach finding, or needing, any technical limitations as usage progresses in that direction?

Thanks in advance, looking forward to future possibilities here!

plinss commented 4 years ago

@mfreed7 We accept the additional implementation cost of building streaming support, and while we would prefer it to be streaming from day one, we understand that the initial implementation will likely not support it, or even that future implementations may have difficulty implementing streaming at all.

What we'd like to see more consideration of is whether this can be designed such that streaming is presumed, and implementations can opt out. The goal being that a user can write their code once, presuming streaming is available, and get the benefits of a streaming implementation should it be available, but not break should the browser not support streaming. This would be more friendly to progressive enhancement, without authors having to develop both streaming and non-streaming versions of their code.

As an example (pure strawman), could there be an async api for fetching the shadowRoot, such that a streaming implementation returns it on the opening tag, and further content may come in later (perhaps another async method to return when a given shadowRoot is fully loaded), and that non-streaming implementations simply don't return the shadowRoot until the close tag (and they always return a fully loaded shadowRoot, or perhaps return an empty shadowRoot and then immediately fill it so all code reacts like streaming is happening).
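The strawman above can be modeled in plain logic. This is a sketch with hypothetical names (`whenShadowRoot` and `onShadowAttached` are illustrations, not proposed API): the caller's code is identical whether or not the engine streams; only when the callback fires differs.

```javascript
// Sketch of the strawman: one code path for callers, regardless of streaming.
// A streaming engine would have host.shadowRoot set at the opening tag; a
// non-streaming engine attaches it at the closing tag and fires a
// (hypothetical) onShadowAttached hook.
function whenShadowRoot(host, callback) {
  if (host.shadowRoot) {
    callback(host.shadowRoot);                               // already attached
  } else {
    host.onShadowAttached = () => callback(host.shadowRoot); // attached later
  }
}

// Usage with a stand-in host object (a real implementation would be the element):
let root = null;
const host = { shadowRoot: null };
whenShadowRoot(host, (r) => { root = r; }); // non-streaming engine: nothing yet
host.shadowRoot = { mode: 'open' };         // parser reaches </template>
host.onShadowAttached();                    // callback now fires; root is set
```

Author code written this way degrades gracefully: it simply observes the root later on a non-streaming engine.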

mfreed7 commented 4 years ago

What we'd like to see more consideration of is whether this can be designed such that streaming is presumed, and implementations can opt out. The goal being that a user can write their code once, presuming streaming is available, and get the benefits of a streaming implementation should it be available, but not break should the browser not support streaming. This would be more friendly to progressive enhancement, without authors having to develop both streaming and non-streaming versions of their code.

Thanks for the suggestion - this is a great idea! If done right, this would allow implementations that do not wish to shoulder the additional burden of supporting streaming not to do so, while still supporting the more basic non-streaming version, and therefore the feature in general.

As an example (pure strawman), could there be an async api for fetching the shadowRoot, such that a streaming implementation returns it on the opening tag, and further content may come in later (perhaps another async method to return when a given shadowRoot is fully loaded), and that non-streaming implementations simply don't return the shadowRoot until the close tag (and they always return a fully loaded shadowRoot, or perhaps return an empty shadowRoot and then immediately fill it so all code reacts like streaming is happening).

So right now this proposal does not specify any events for shadowRoot attachment, even in the non-streaming case. We've been trying to keep that problem (the "parser finished / children changed" problem) separate from this one, just to avoid hanging this proposal up on finding a solution to that problem. However, I definitely think we should tackle that problem in a general way. Would you be ok if we just made a change here something like this:


<template shadowroot="open" do-not-stream>

The do-not-stream attribute, if present, indicates that the shadow root should only be attached when the parser encounters the closing </template> tag, and not before. If this attribute is not present, then the implementation may or may not attach the shadow root upon the opening <template shadowroot> tag, and begin constructing shadow content in place in the #shadowroot, rather than waiting for the closing tag. In other words, support for "streaming" is an optional feature of the rendering engine, and may not be supported by all engines.


With such a definition, and even absent the events being defined, I think this is a very usable API. Custom elements can co-exist with SSR Shadow DOM by ensuring that the SSR content comes before any async script tags. Later, when issue 809 is solved, the implementations can get more creative and detect the completion (or start) of shadow root parsing.

What do you think?

kenchris commented 4 years ago

Looking at this with @hober, I was trying to understand the major fear people have about not supporting streaming. It seems to be that web sites that are 100% bought into web components, in the sense that even their app root element is a custom element, might lose incremental rendering.

As I understand you already have an experimental implementation, so I am wondering if that is really the case? I also believe it would be great to have this discussion as part of the explainer.

mfreed7 commented 4 years ago

Looking at this with @hober, I was trying to understand what was the major fear that people have with not supporting streaming. It seems to be that there is fear that web sites that are 100% bought into web components, in the sense that even their app root element is a custom element, might lose incremental rendering.

Thanks for the comment! Developers want streaming (e.g. comment) to improve FCP (First Contentful Paint) and similar metrics. As you mentioned, the extreme case of that is a <my-app>{everything else}</my-app> type site, where nothing will be rendered until the closing </my-app> tag is parsed. But even absent that, for large individual components, there might still be an FCP hit if streaming is not supported.

Having said that, I think the streaming ship has sailed, as there is strong pushback by WebKit on streaming support. And Gecko has been mostly quiet about the feature, except to say they wouldn't support a "streaming-optional" implementation. So in order to gain 2-implementer support, I'm punting the streaming feature.

As I understand you already have an experimental implementation, so I am wondering if that is really the case? I also believe it would be great to have this discussion as part of the explainer.

Chromium does have an experimental implementation, but we don't have data on this aspect yet. It might be difficult to gather such data, as the feature doesn't support streaming. There is a brief discussion of this aspect of the design in the explainer, here.

alice commented 4 years ago

Looking through the linked discussion, it's not clear to me that WebKit is pushing back on streaming itself so much as on having streaming be optional to implement; it looks like Mozilla is making similar arguments, as you summarised.

Since we only really had feedback on the streaming feature, and that discussion seems to be ongoing among the relevant stakeholders, it seems like we can probably close up this review, unless you had any other questions for us to think about.

mfreed7 commented 4 years ago

Looking through the linked discussion, it's not clear to me that WebKit is pushing back on streaming, so much as having streaming be optional to implement; it looks like Mozilla is making similar arguments, as you summarised.

You're right that in the current discussion, the streaming-related pushback has been on optional streaming, which is why I punted on that option. But the prior decision in 2018 not to move forward with declarative Shadow DOM was mostly predicated on the difficulty and security issues inherent in a streaming solution. I wrote up a summary of that discussion in the explainer, here. It was this specific prior discussion that motivated me to pursue the non-streaming solution when I revived declarative Shadow DOM in 2020, in the hopes of getting multi-implementer support.

Since we only really had feedback on the streaming feature, and that discussion seems to be ongoing among the relevant stakeholders, it seems like we can probably close up this review, unless you had any other questions for us to think about.

Only one other issue has recently come up: a potential sanitizer bypass using declarative Shadow DOM. I have written up a summary of the issue and added it to the explainer. (I've also posted about this in the issue discussion, and reached out to sanitizer libraries.) I believe this, like other sanitizer bypasses, is best handled by the sanitizer libraries themselves, which already need to issue frequent updates to keep on top of security issues. But if you have any input on ways to mitigate this issue from an API perspective, I'd be very interested to hear your input. From my perspective, the issue seems fairly fundamental to any declarative Shadow DOM solution that allows closed shadow roots, mixed with any sanitizer library that allows the return of live DOM instead of string HTML. But thoughts appreciated!
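To illustrate the class of bypass being described (hypothetical payload, sketched under the assumption of a sanitizer that walks and returns live DOM rather than a serialized string): sanitizers typically treat `<template>` content as inert, but when the surrounding markup is parsed with declarative Shadow DOM enabled, that content is attached as a live (possibly closed) shadow root that the sanitizer never inspected.

```html
<!-- Hypothetical attacker-supplied markup. A live-DOM sanitizer that
     considers <template> content inert, or that cannot reach into a
     closed shadow root, may return this subtree with the payload intact,
     where it is already active DOM rather than an inert template. -->
<div>
  <template shadowroot="closed">
    <img src="x" onerror="alert(document.cookie)">
  </template>
</div>
```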

If there's no input on the above issue, I do think we can close this TAG review. I really appreciate all of the feedback and help here!

hober commented 3 years ago

If you have any input on ways to mitigate this issue from an API perspective, I'd be very interested to hear your input.

I think you've already identified the key mitigation—to disable Declarative Shadow DOM from the fragment parser.
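A sketch of what that mitigation means in practice (the `includeShadowRoots` option shown here is the opt-in from the Chromium experiment described in the explainer; exact API names may differ or have changed since):

```html
<script>
  const payload = '<div><template shadowroot="open">…</template></div>';

  // With declarative Shadow DOM disabled in the fragment parser, paths
  // like innerHTML, insertAdjacentHTML, and DOMParser (by default) do
  // NOT attach a shadow root: the <template> stays an inert template
  // element, so sanitizer libraries still see and can strip its content.
  const div = document.createElement('div');
  div.innerHTML = payload;  // shadow root is not attached here

  // Only an explicit opt-in enables declarative shadow roots when
  // parsing strings, keeping the default path safe for sanitizers:
  const doc = new DOMParser().parseFromString(
      payload, 'text/html', {includeShadowRoots: true});
</script>
```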

From my perspective, the issue seems fairly fundamental to any declarative Shadow DOM solution that allows closed shadow roots, mixed with any sanitizer library that allows the return of live DOM instead of string HTML.

Yup.

If there's no input on the above issue, I do think we can close this TAG review.

Okay, will do.

I really appreciate all of the feedback and help here!

Thanks for all your hard work on this & other features!

mfreed7 commented 3 years ago

I really appreciate all of the feedback and help here!

Thanks for all your hard work on this & other features!

Thanks very much!