jakearchibald opened 7 years ago
Parts of this can be hacked using document.write and <iframe>, but it'd be good to have a non-hacky way.
Code for the above use-case, if element.writable provided a way to stream HTML into the element:
const article = document.createElement('article');
const response = await fetch('article.include');

const articleHasContent = new Promise(resolve => {
  const observer = new MutationObserver(() => {
    observer.disconnect();
    resolve();
  });
  observer.observe(article, {childList: true});
});

response.body
  .pipeThrough(new TextDecoderStream())
  .pipeTo(article.writable);

await articleHasContent;
performTransition();
There is also https://w3c.github.io/DOM-Parsing/#idl-def-range-createcontextualfragment(domstring), which does execute scripts (when inserted into a document), as a possible alternative to document.write in the meantime...
TIL! I guess you mean an alternative to innerHTML?
No, as an alternative to document.write in your hack. Don't even need an iframe, just a Range instance.
http://software.hixie.ch/utilities/js/live-dom-viewer/saved/4716
@zcorpan ah, that doesn't allow partial trees http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=4717, which you kinda need if you're trying to stream something like an article.
Hello! I'm curious about the possibility of inserting scripts. I'm trying to execute a script, and I see the script is executed after insertion.
<!DOCTYPE html>
<body>
  <script>
    var r = new Range();
    // Write some more content - this should be done async:
    document.body.appendChild(r.createContextualFragment('<p>hello'));
    document.body.appendChild(r.createContextualFragment(' world</p>'));
    document.body.appendChild(r.createContextualFragment('<script>console.log("yeap");<\/script>'));
    // done!!
  </script>
</body>
I have two questions. First: what if the inserted script has the async or defer attribute? It seems those flags have no effect. Second: will you keep this function in the standard, or are you planning to remove it?
Finally I found (you gave me) the best way to insert content into the document. Thanks a lot!
@rianby64 note that the example above creates two paragraphs rather than one.
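To illustrate: each createContextualFragment call parses its string as a complete fragment, so an unclosed tag can't span calls. A minimal sketch of what the example above actually produces:

const r = new Range();
// Parsed as a complete fragment, so the open <p> is closed automatically:
document.body.appendChild(r.createContextualFragment('<p>hello')); // <p>hello</p>
// " world" arrives as loose text; the stray </p> doesn't reopen the first paragraph:
document.body.appendChild(r.createContextualFragment(' world</p>'));
// Result: "hello" and " world" end up in separate nodes, not one <p>hello world</p>.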
What if the inserted script has the async or defer attribute? It seems those flags have no effect.
The scripts will be async, as if you'd created them with document.createElement('script'). For the streaming solution I mentioned in the OP, I'd like the parser to queue DOM modifications while a non-async/defer script downloads and executes, but allow something like a look-ahead parser.
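For comparison, a quick sketch of the createElement behaviour being referred to (the script URLs are placeholders):

const s1 = document.createElement('script');
s1.src = '/a.js'; // dynamically-inserted scripts are async by default: they run whenever they arrive
const s2 = document.createElement('script');
s2.src = '/b.js';
s1.async = false; // opting out of async restores insertion-order execution
s2.async = false;
document.head.append(s1, s2);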
Will you keep this function in the standard?
Which function? createContextualFragment? I don't see why it'd be removed.
OK. Thanks a lot again.
ah, that doesn't allow partial trees
Indeed.
In general I think we want to be able to provide ReadableStream or Response objects to APIs that currently take a URL. @jakearchibald, would something that let you assign a ReadableStream or Response (backed by a stream) to an iframe.src satisfy your use case?
The key part here is to not have to use a separate iframe plus adoption of the current parser insertion point into a different document. Instead, we just want to parse into an existing document location.
This means creating an element that has the concept of partially loaded state, right? An iframe already has all of that, but do other html container elements? So wouldn't we need to create something that has all the load event, error event, and other stateful information of an iframe? Or maybe all that exists today. HTML always catches me out.
@wanderview
This means creating an element that has the concept of partially loaded state, right?
I think we can get away without this. If an element has a writable endpoint you'll get locking on that writable for free. However, during streaming you'll be able to modify the children of the element, even set the element's innerHTML. The HTML parser already has to deal with this during page load, so I don't think we need to do anything different.
So wouldn't we need to create something that has all the load event, error event, and other stateful information of an iframe?
We probably don't need this either. htmlStream.pipeTo(div.writable) - since pipeTo already returns a promise you can use that for success/failure info.
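For example (a sketch assuming the proposed div.writable; showError is a hypothetical handler):

try {
  await htmlStream.pipeTo(div.writable);
  // the whole stream was parsed into the element
  performTransition();
} catch (err) {
  // the stream errored or was cancelled part-way through
  showError(err);
}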
How would this interact / compare with the following scenario:
Rather than fetching HTML snippets from the server, I'm much more likely to be able to fetch [we'll assume newline-delimited to enable stream parsing] a minimal JSON encoding of whatever entity I'm trying to display.
Partially, this is just down to the fact that most web servers wrap HTML output in a series of filters, one of which is a base "
... so, assuming that we use JSON, is there a performance win to being able to render JSON snippets [as they come over the network] to HTML? The trade-off I'd assume we're making is on triggering additional layouts; put another way, is it faster to do:
while (nextJSONitem) { JSON -> HTML -> DOM }
My expectation is that the answer is "it depends"; I don't have a sufficiently reliable playground for testing this to any degree of accuracy, but I would expect we'd want to keep the render pipeline as unobtrusive as possible while minimizing network->screen latency for individual items, using the following as trade-offs:
time to render & re-compute layout for:
total time to:
... ideally all while minimizing client complexity ("they wrote a lot of code to make things that slow"). Thankfully that part should be hidden in frameworks.
... OR am I totally barking up the wrong tree with the idea that JSON is the right delivery mechanism, and we should aim to generate server-side HTML snippets for pretty much anything that can be fetched-with-latency?
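Coming back to the per-item option above, for concreteness here's a rough sketch assuming newline-delimited JSON; renderItem (turning one decoded item into a DOM node) and the list element are hypothetical:

const response = await fetch('/items.ndjson');
const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
let buffer = '';
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  buffer += value;
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep any trailing partial line for the next chunk
  for (const line of lines) {
    if (line.trim()) list.appendChild(renderItem(JSON.parse(line)));
  }
}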
@blaine I think I cover what you're asking over at https://jakearchibald.com/2016/fun-hacks-faster-content/
@jakearchibald It kind of feels like there should be a way for code other than the one writing to the element to know if it's complete. The pipeTo promise, while useful, does not seem adequate for that.
For example, code that uses a query selector to get an element and operate on it should have some way to know if the element is in a good state. Seems like that kind of code is usually pretty independent.
response.body
  .pipeThrough(new TextDecoderStream())
  .pipeTo(article.writable);
Would indeed be a big win! ❤️
@jakearchibald durr. I'd read that a few days ago and forgotten the second part of your post in this context. Sorry, I blame lack of coffee. ;-)
Re-reading this more carefully, the element.writable pipe makes a ton of sense, and it'd be trivial for a rendering pipeline to make use of it, even in the JSON case. +1
Wait, how would the element.writable getter even work, since a WritableStream usually (bar explicitly passing 'preventClose') can only be pipeTo'd once, after which it becomes closed and can't be written to again?
htmlStream.pipeTo(div.writable).then(() => htmlStream2.pipeTo(div.writable) /* cancels source stream and does nothing? */);
What happens when it's already locked to a previous, still incomplete, still streaming request but you changed your mind/ the user clicked to the next article already?
htmlStream.pipeTo(div.writable); // locked
htmlStream2.pipeTo(div.writable); // doesn't work, stuck waiting?
Would it have to produce a fresh WritableStream on every access? Then every access would have to instantly invalidate all the previous writable streams so that writing to them does nothing, and only the latest affects the element's contents?
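For reference, this is how plain streams behave today, which is what makes a single long-lived writable awkward (standalone sketch):

const dest = new WritableStream({ write(chunk) { /* consume chunk */ } });
const a = new ReadableStream();
const b = new ReadableStream();
a.pipeTo(dest);           // acquires the writer lock
console.log(dest.locked); // true
b.pipeTo(dest).catch(err => {
  console.log(err.name);  // TypeError - dest is already locked to the first pipe
});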
@jakearchibald I'm curious how you respond to @isonmad's comment; it seems like a valid argument against a WritableStream here. And of course the lack of cancelable promises is hurting us here...
Yeah, this seems like a good argument against element.writable and for something like:
htmlStream.pipeTo(div.getWritable());
or
const domStreamer = new DOMStreamer();
div.appendChild(domStreamer);
htmlStream.pipeTo(domStreamer.writable);
What happens when it's already locked to a previous, still incomplete, still streaming request but you changed your mind/ the user clicked to the next article already?
This could be done with domStreamer.abort() or somesuch, but maybe it's a more general problem to solve - how to abort a pipe.
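A sketch of one possible shape, assuming pipeTo accepts an AbortSignal (which the Streams spec now allows) plus the proposed div.writable:

const controller = new AbortController();
const done = htmlStream.pipeTo(div.writable, { signal: controller.signal });
// later, the user clicks through to the next article before the stream finishes:
controller.abort();
done.catch(err => console.log(err.name)); // "AbortError"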
Would it have to produce a new fresh WritableStream on every access? Then every access would have to instantly invalidate all the previous writable streams so that writing to them does nothing, and only the latest effects the element's contents?
Taking the above models, would it be bad to allow two streams to operate within the same element? Sure you could get interleaving, but that's already true with two bits of code calling appendChild asynchronously.
The browser already has to cope with the HTML parser and appendChild operating on the same element, so it doesn't feel like anything new.
Any idea what the status of this is?
Use-case: I work on a news site and I want to create visual transitions between articles, but I don't want to lose the benefits of streaming. So:
Not only is innerHTML a slower way to do this (due to the lack of streaming), it also introduces a number of behavioural differences. It'd be great to try to limit these, e.g. allow inline scripts to execute before additional elements are inserted.
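For contrast, the non-streaming baseline looks something like this (a sketch reusing the names from the OP; note that inline scripts assigned via innerHTML never execute, which is one of the behavioural differences mentioned):

const article = document.createElement('article');
const response = await fetch('article.include');
// nothing renders until the whole payload has arrived,
// and any inline <script> in it is inert when inserted this way
article.innerHTML = await response.text();
performTransition();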