pelme opened 3 months ago
just a drive-by comment: this is cool - would love to use this in a Starlette app. i will build some prototypes off this branch to get a feel for it, but i think asyncio support would be very useful for me.
@raisjn cool, thanks for dropping the comment!
I feel like everyone should use streaming (sync or async) for improved user experience and loading times with minimal effort. It is truly an underused technique in current web development that could give a nice boost in web performance. I am also thinking of adding Starlette/Django/Flask HtpyResponse classes, each based on the framework's respective streaming response class, just to lower the friction and make it as straightforward as possible to use streaming. Right now streaming is kind of "opt-in" if you use the StreamingResponse classes; I would like to make it easier and make streaming the default path for using htpy.
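A minimal sketch of what such a Starlette-flavoured HtpyResponse could look like - the class name and defaults are just illustrative, and it assumes that iterating an htpy element yields rendered HTML chunks:

```python
# Sketch only, not part of this PR: a Starlette response that streams an
# htpy element. Assumes iterating an htpy element yields HTML chunks.
from starlette.responses import StreamingResponse


class HtpyResponse(StreamingResponse):
    def __init__(self, element, status_code: int = 200, **kwargs) -> None:
        kwargs.setdefault("media_type", "text/html")
        # Hand Starlette the chunk iterator so bytes go out while the
        # rest of the tree is still being rendered.
        super().__init__(iter(element), status_code=status_code, **kwargs)


# Usage in an endpoint (element names from htpy):
#   return HtpyResponse(html[head[title["Hi"]], body[h1["Hello!"]]])
```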
I think this PR is pretty much in good shape to be merged. I have re-written the implementation/tests multiple times and think it is pretty solid/well tested at this point. Any review/feedback on the implementation would be very welcome though! I have not merged it yet because a) I am not using async myself in my day-to-day projects and b) there has not been any visible feedback/interest in it yet.
@raisjn if you would build some prototypes, play with them, and get back with feedback, that would be very valuable and we could get this going! 🙂
i'm still playing with htpy (cool!) and async iteration. the code looks fine to me, but i will report back after more days (weeks?) of testing and trying to build the functionality below:
comments on async delivery:
from what i can tell, it resolves the work in sequential order. what i am aiming for is pipelined rendering (also known as partial pre-rendering in Next.js). i believe pipelined delivery can be built on top of (or alongside) htpy's async rendering using some JS + an async queue.
it would look something like this:
```python
div[
    Placeholder(do_some_work()),
    Placeholder(do_some_work()),
]
```
`Placeholder` would emit a placeholder div that will later be filled in with the actual async work when it finishes. this seems straightforward for me to implement with the streaming functionality you've built (so thank you!)
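a rough sketch of how that could be wired up on top of plain streaming, using an asyncio queue plus a small inline script that swaps each finished fragment into its placeholder. none of this is htpy or PR API - every name below is made up:

```python
import asyncio
import itertools

from htpy import div

_counter = itertools.count()


class Placeholder:
    """Emits an empty div now; its real content arrives later in the stream."""

    def __init__(self, coro):
        self.id = f"ph-{next(_counter)}"
        self.coro = coro
        self.element = div(id=self.id)["loading…"]


async def _resolve(placeholder, queue):
    # Run the slow work and hand the finished HTML to the streamer.
    await queue.put((placeholder.id, await placeholder.coro))


async def stream_page(page, placeholders):
    queue: asyncio.Queue = asyncio.Queue()
    tasks = [asyncio.create_task(_resolve(p, queue)) for p in placeholders]

    # 1. The main document goes out first; placeholders render as empty divs.
    yield str(page)

    # 2. Fragments follow in completion order, each wrapped in a <template>
    #    plus a tiny script that moves it into the matching placeholder.
    for _ in tasks:
        target, fragment = await queue.get()
        yield (
            f'<template id="tpl-{target}">{fragment}</template>'
            f'<script>document.getElementById("{target}").replaceChildren('
            f'document.getElementById("tpl-{target}").content.cloneNode(true));'
            f"</script>"
        )
```

usage would be something like `ph = Placeholder(do_some_work()); page = body[div[ph.element]]`, with `stream_page(page, [ph])` handed to a streaming response.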
Interesting! I think just sending CSS/JS before the full page is rendered gives a good boost for free for anyone. Streaming multiple parts of the page and then recombining them with JavaScript sounds interesting! It feels like there should be a small lib for that. It feels simple in theory! 😆
> I think just sending CSS/JS before the full page is rendered gives a good boost for free for anyone
i agree - this is a great boost! i'm not 100% sure using async iteration is a good answer for this functionality, as it would end up being a hidden implementation detail instead of an explicit consideration by devs: consider adding a special function like `flush()`:
```python
html[
    head[
        link[...],
        script[...],
    ],
    flush(),
    body[
        div[foobar]
    ],
    flush(),
    some_extra_work(),
]
```
the exact API doesn't matter as long as it is explicit; it could also be a special tag (`early_flush[some_html_code]`).
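a rough sketch of how an explicit flush marker could be prototyped without any htpy changes - emit a unique HTML comment as the marker and let the response layer split on it (all names below are made up):

```python
# Assumes htpy honours markupsafe's __html__ protocol and leaves Markup unescaped.
from markupsafe import Markup

_FLUSH_MARKER = "<!--__flush__-->"


def flush():
    # A marker child; flush_chunks() strips it, so it never reaches the client.
    return Markup(_FLUSH_MARKER)


def flush_chunks(element):
    """Yield one joined string per flush() point (and once more at the end)."""
    buffer = []
    for chunk in element:  # iterating an htpy element yields HTML chunks
        parts = chunk.split(_FLUSH_MARKER)
        buffer.append(parts[0])
        for part in parts[1:]:
            yield "".join(buffer)
            buffer = [part]
    tail = "".join(buffer)
    if tail:
        yield tail
```

passing `flush_chunks(page)` to a streaming response would then make the flush points explicit chunk boundaries.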
why would an explicit flush function/tag be useful? currently with this PR, everything will be "flushed" as soon as it is ready anyway.
coming back to this: it turns out i don't need async iterators in htpy. i have architected an app with async delivery that allows parallel pipelining instead of sequential rendering and uses htpy without async. in my first reading of this PR, i had assumed async iteration ran in parallel (collect all async work in one pass, then await it in parallel, then do the next set, etc.).
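to illustrate the distinction i had in my head, it's roughly the difference between these two toy helpers (nothing htpy-specific, just an illustration):

```python
import asyncio


async def render_sequentially(coros):
    # Awaiting children one after another: each slow child blocks the next.
    return [await coro for coro in coros]


async def render_in_parallel(coros):
    # What i had assumed: start everything, then await the results together.
    return await asyncio.gather(*coros)
```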
re: why use explicit flush: i think it would help people who aren't aware of the mechanics of rendering and of using an async generator vs a sync generator.
This PR adds the possibility to use async awaitables/iterators/generators to generate a response. A sample Starlette example is added too. Thoughts? Ideas?
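A hedged sketch of what using this from Starlette could look like - the exact name of the async rendering entry point is a guess here (spelled `aiter_node` below), so treat all names as illustrative rather than as the PR's actual API:

```python
import asyncio

from starlette.applications import Starlette
from starlette.responses import StreamingResponse
from starlette.routing import Route

# aiter_node is an assumed name for the async rendering entry point in this PR.
from htpy import aiter_node, body, h1, head, html, li, title, ul


async def slow_items():
    # An async generator used directly as element children,
    # which is the kind of thing this PR enables.
    for n in range(3):
        await asyncio.sleep(0.1)  # stand-in for a database call etc.
        yield li[f"item {n}"]


async def index(request):
    page = html[
        head[title["async htpy"]],
        body[h1["Hello"], ul[slow_items()]],
    ]
    return StreamingResponse(aiter_node(page), media_type="text/html")


app = Starlette(routes=[Route("/", index)])
```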