withastro / roadmap

Ideas, suggestions, and formal RFC proposals for the Astro project.

Server islands #945

Closed matthewp closed 2 months ago

matthewp commented 2 months ago

Summary

Allow islands within a prerendered page to be server-rendered at runtime.

Background & Motivation

Often a page is mostly static, with only a few parts that need to be rendered on demand. For example, a product page might need to show stock levels or personalised recommendations. Currently the only options are to render the whole page on demand, or to render the dynamic parts on the client. This proposal introduces the concept of deferred islands, which are not prerendered, but rather server-rendered on demand at runtime.

Next.js is working on a solution called partial pre-rendering, which allows most of the page to be prerendered, with individual postponed parts rendered using SSR on-demand. The implementation is quite different from what I propose for Astro, but the concept is similar.

Goals


Possible goals

Non-goals

Example

A component would be deferred by setting the server:defer directive.

<Like server:defer />

The "fallback" slot can be used to specify a placeholder that is pre-rendered and displayed while the component is loading.

<Avatar server:defer>
    <div slot="fallback">Guest</div>
</Avatar>

The page can pass props to the component like normal, and these are available when rendering the component:

---
import Like from "../components/Like";

export const prerender = true;

const post = await getPost(Astro.params.slug)
---
<Like server:defer post={post.id} />

The component itself does not need to do anything to support deferred rendering, so this should work with any existing component. However, deferred components can optionally use special powers, and can detect whether they were deferred by checking the Astro.deferred prop, which indicates that the component was deferred at build time and is now being rendered on demand.

These special powers exist because, during deferred rendering, a component is rendered like a mini page. This means it can use features such as Astro.cookies and can set headers on Astro.response. Astro.url and Astro.request.url come from the original page and are passed in the request along with the props.

---
// Like.astro

const { post } = Astro.props

let user = { name: "Guest" }

// If this is a deferred render then we may have a user's cookie
if (Astro.deferred) {
    user = await getUser(Astro.cookies.get('session')?.value)
}
---

<div>
    <span class="name">{user?.name}</span>
</div>

Implementation

When rendering the static page, deferred components would not be rendered; instead an <astro-island> containing any placeholder content would be emitted. The <astro-island> would embed the serialized props, as well as the URL of the deferred endpoint.
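
For illustration, the emitted placeholder for the <Like> example above might look roughly like this (the attribute names mirror the runtime sketch below; the endpoint path and prop serialization are illustrative, not final):

<astro-island defer
    component-endpoint="/_island/Like"
    props='{"post": 123}'>
    <!-- any prerendered fallback content goes here -->
</astro-island>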

When the page has loaded, a request would be made to each deferred endpoint (see "GET vs POST" below for considerations). This request would pass all of the props and other serialized context.

On the server, the component would effectively be rendered inside a thin wrapper page that decodes and forwards the props and rewrites the Astro global values.
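
A very rough sketch of that wrapper, treated as an ordinary POST endpoint (the rendering helper below is a stand-in, not a real Astro API):

// Hypothetical wrapper endpoint. A real implementation would render the
// actual Astro component here; renderDeferredComponent is only a stand-in.
async function renderDeferredComponent(props, url) {
    return `<div>rendered with ${JSON.stringify(props)} for ${url.pathname}</div>`;
}

export async function POST({ request }) {
    const { props, url } = await request.json();
    const html = await renderDeferredComponent(props, new URL(url));
    return new Response(html, { headers: { "content-type": "text/html" } });
}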

When the browser has loaded the response from the endpoint, it would use it to replace the content of the island.

The runtime for replacing the deferred islands would be in an inline script tag. A simplified version without error handling could look like this:

document.addEventListener("DOMContentLoaded", () => {
    document.querySelectorAll("astro-island[defer]").forEach(async (element) => {
        // Read the serialized props and endpoint URL embedded at build time
        const props = JSON.parse(element.getAttribute('props'));
        const endpoint = element.getAttribute('component-endpoint');
        const response = await fetch(endpoint, {
            method: "POST",
            body: JSON.stringify({ props, url: document.location.href })
        }).then(res => res.text());
        // Swap the placeholder content for the server-rendered HTML
        const range = document.createRange();
        range.selectNodeContents(element);
        const fragment = range.createContextualFragment(response);
        range.deleteContents();
        range.insertNode(fragment);
    });
});

GET vs POST

One of the unanswered questions is whether to use GET or POST requests for the deferred component endpoints. The benefit of POST is that it can send arbitrarily large request bodies. The benefits of GET are that responses are cacheable and can be preloaded in the page head. Some options are:

Why not streaming?

An alternative approach would be to send the postponed components in the same response as the initial shell. This is how Next.js PPR is currently implemented. That has some benefits, but I think they are outweighed by the drawbacks in most cases. Astro has always been static-first, and I think that approach is best here too.

Primarily, a prerendered, static page is easily cacheable, both in the browser and in a CDN. This is not the case when the deferred data is in the same response. With the static-first approach, the static part can be cached at the edge, near to users, with a very fast response time, while the deferred content can be rendered and served near the site's data without blocking rendering of the rest of the page. If you want to stream the deferred content in the same response, you either have to render everything at the origin and take the hit on distance from users, or render it all at the edge and take the hit on distance from your data. In some cases rendering everything at the edge is fine (e.g. if there's no central data source or API access), and Astro already supports that.

You can work around this with logic at the edge that combines a locally cached shell with a stream of updates from the origin, and edge middleware to do this could be a helpful option. It still prevents the use of the browser cache, though, because the browser can't make conditional requests for the prerendered page - the whole thing needs to be sent on every request in case the deferred data has changed.

Tc-001 commented 2 months ago

(Friday evening, sorry if a bit rambly 😅)

Overall looks good, am happy this is being tackled!

One issue I have with the islands being a separate request is prop validation. As opposed to Astro.request.params, I would guess that most people implicitly trust the props given to a component. If not validated, for most people it would probably just be a DoS vector at worst, but it could lead to XSS, SQL injection or some pretty scary stuff (e.g. <RenderFileAsMD path={} server:defer />, or god forbid it somehow gets combined with astro-command)

I am unsure if there is an elegant solution to this.

The simplest option could be to document that you can't trust the props and need to make sure you handle them accordingly, but personally that feels like a footgun that someone will inevitably trigger.

A band-aid-y-feeling solution could be to sign the props so that when the server gets the island request it can verify it is "intended". But then you need the dev to define a secret key, or somehow store a session of it.

Another option could be that the islands are part of the same response and are swapped in with some simple JS (it was addressed here as not being ideal, but I think it is better than passing untrusted props straight into user code).


If I was to implement it, I would go with somehow signing the props. It would be quite weird and inconvenient so I hope there is a better solution that I am missing.

jkhaui commented 2 months ago

Just wanted to add my 2 cents - I think this is a fantastic initiative! I do really agree with the sentiment Next/Vercel is pushing (something along the lines of "PPR will be the default rendering model for modern websites/web apps").

Coincidentally, I've been deeply researching this area over the last few days in an attempt to handroll my own PPR (or whatever you want to call it) with Astro. Based on this I wanted to share some general thoughts/findings:

  1. My research led me to believe the only way I could achieve granular hybrid static/dynamic caching within a single route/page was with a technology that far precedes my time as a web developer: edge-side includes (ESI). I guess ESI is the closest "platform-native" way to achieve PPR by directly embedding XML markup as comments inside a static HTML page, but it's probably not the solution here as it seems to be vendor-dependent; i.e. one's hosting/CDN provider has to be ESI-compliant.

However, I thought I'd still bring it up because there are likely some great lessons to be found in the spec or in example implementation guides, such as this one from Cloudflare Workers: https://blog.cloudflare.com/edge-side-includes-with-cloudflare-workers. On a related note, Vercel is exposing their platform-specific APIs for any framework to use their PPR model (see screenshot below). Whether or not Astro uses it, it could similarly provide valuable insight - unless streaming is completely ruled out of Astro's implementation.

[screenshot: Vercel's partial prerendering API announcement, referenced above]
  2. Server-side caching isn't my area of expertise, so apologies if anything I say below is incorrect. But of the implementation options presented, I think it must be this one:
    • allow users to opt-in to cacheable contents. This would be more flexible and potentially allow the use of default cache headers. However it could be hard to teach, and would require builds to be failed if the props get too long.

My reasoning is because while there could be some default caching behaviour, surely the developer should have the final say on what is/isn't cached?

Let me use my own real-world use-case to explain. I'm building an Astro web app which is highly dynamic in 2 dimensions:

Regarding the first point: if the user visits from desktop, a desktop version of the site is shown (such as the web app shell consisting of a sidebar navigation menu and top navbar). If the user visits from an iOS device, they see an iOS-themed mobile version of the app shell, e.g. with bottom tabbed navigation instead of a sidebar. And similarly, Android users will be served a material design-themed version of the mobile shell.

The key point here is that each of these 3 "shells" should be fully cacheable, with the cached version served to users based on what device they're viewing from (as there's no personalised UI being served). But if I understand correctly, I'd still need these parts to be dynamically server-rendered because I need access to the user-agent headers in the request. A similar case I can think of is having a fixed number of versions of your webpage based on what location a user visits from (e.g. show Component A for visitors from Asia, Component B for European visitors, etc.).

So in these situations, I would need some way to mark the server-island as "cacheable", right?

Now compare this to the second point on an island with personalised data: none of this should be cached as it's different per user/could contain sensitive info? So in this case, I'd need to mark such islands as "non-cacheable"?

ascorbic commented 2 months ago

@Tc-001

If I was to implement it, I would go with somehow signing the props. It would be quite weird and inconvenient so I hope there is a better solution that I am missing.

I was thinking about the same thing, and do think that signing the props would be the best approach. It could be transparent to the user. The secret key could be auto-generated during build, and be cached on the server. The props would be signed with HMAC at build or SSR time, and the signature included as an attribute on the element. Next.js does something similar with its generated preview token.
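
A minimal sketch of that signing scheme using Node's built-in crypto (key management and the exact prop serialization are glossed over here):

import { createHmac, timingSafeEqual } from "node:crypto";

// Sign the serialized props at build/SSR time...
function signProps(serializedProps, secret) {
    return createHmac("sha256", secret).update(serializedProps).digest("hex");
}

// ...and verify the signature when the island endpoint receives the request.
function verifyProps(serializedProps, signature, secret) {
    const expected = Buffer.from(signProps(serializedProps, secret), "hex");
    const received = Buffer.from(signature, "hex");
    return expected.length === received.length && timingSafeEqual(expected, received);
}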

ascorbic commented 2 months ago

@jkhaui It doesn't make sense to tie this to a proprietary Vercel feature, even if it is open to other frameworks. Any solution should be one that users can deploy to any hosting platform. ESI is an option. The main drawback I see with it is that there's no out-of-order rendering: the whole page blocks while the includes are loaded, so you don't get the benefit of fast loading of the shell. What could work, if you were willing to sacrifice caching, would be to implement something similar to the proposed solution (i.e. with a little bit of JS to do the replacement), but with the actual content streamed in the same response. This could be done with edge middleware.

On 2 you make a good point. I think the idea would be that the shell could be SSG or SSR, and could still be cached. In your example you could render it dynamically, but send Vary headers so that different versions can be cached according to various criteria. This Netlify post shows how to do it with their headers, but I think other hosts have similar directives. https://www.netlify.com/blog/netlify-cache-key-variations/#language-and-country
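
For example, a dynamically rendered shell layout could pick a variant and ask caches to keep separate copies, roughly like this (a coarse User-Agent-based sketch; the linked post describes finer-grained, host-specific cache keys):

---
// Hypothetical SSR shell layout: choose a variant from the user agent and
// ask caches to store a copy per User-Agent value (coarse, but illustrative).
const ua = Astro.request.headers.get("user-agent") ?? "";
const shell = /iPhone|iPad/.test(ua) ? "ios" : /Android/.test(ua) ? "android" : "desktop";
Astro.response.headers.set("Vary", "User-Agent");
---
<div class={`shell shell-${shell}`}>
    <slot />
</div>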

Tc-001 commented 2 months ago

I would love it if there were some options for how it is handled, something like:

Different adapters could then take advantage of platform-specific features to provide the best implementation, with a fallback to a generic HTTP-based one.

Although, from what I've heard, making :dynamic happen would take quite some work in core, so I understand if it is out of scope. :visible would be nice to have.

(I so, so wish there was something better than iframes for async-loading HTML without JS - it would solve so many issues)

ascorbic commented 2 months ago

@Tc-001 I really like the idea of supporting server:visible. It shouldn't be too much extra lift to support either. As you suggest, I do think that :dynamic would be quite a bit more lift. Would you see it working a bit like ESI, where it would block on each island until it's loaded?

Tc-001 commented 2 months ago

@ascorbic Yes, basically that. It should also be possible to load all the dynamic islands in parallel because they are known beforehand, so it should be quite a bit faster than regular SSR, which (afaik) waits on each component before continuing. But yeah, that would need quite a bit more work to implement.

matthewp commented 2 months ago

What would be the advantage of the server:dynamic version of this idea aside from not requiring JavaScript? The amount of JavaScript in server:defer is going to be very minimal; if that's the only reason then I'm not sure that it's worth it.

I could see an argument that by doing it in an edge CDN you are kick-starting the request earlier than if it's delayed until the client-side JS runs.

Tc-001 commented 2 months ago

Probably nothing else really, other than it being the placeholder for a split second. But even then you can have the browser cache the island (if GET is chosen) and it would almost immediately switch to the correct one.

Tc-001 commented 2 months ago

Hmmm... a fun thing to add could be a way to "refresh" an island. It could be really simple (I think, I haven't really looked at vanilla events in a bit :sweat_smile: )

<!-- would is:inline be needed? Here I assume hoisted scripts wouldn't work. -->
<script is:inline>
const refreshButton = ...

refreshButton.addEventListener("click", () => {
    // Bubble so the containing island (or whatever listens for it) can react
    refreshButton.dispatchEvent(new Event("astro:island:refresh", { bubbles: true }));
})
</script>

That re-fetches the island and replaces (or diffs) the children.
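
A sketch of what the island runtime's side of that could look like (the event name and attributes follow the sketches above; none of this is an actual API, and re-sending the serialized props is omitted):

document.querySelectorAll("astro-island[defer]").forEach((island) => {
    island.addEventListener("astro:island:refresh", async () => {
        const endpoint = island.getAttribute("component-endpoint");
        const html = await fetch(endpoint).then((res) => res.text());
        island.innerHTML = html; // a real implementation might diff instead
    });
});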


This raises the question of what happens if the island loads from a newer build than the rest of the page, as styles and resources would probably not align. For something like CF Pages it could call the actual pages.dev build URL as a fallback, but that would need to be implemented in the adapter.

ascorbic commented 2 months ago

I think the need to block on loading the islands in server:dynamic would defeat a lot of the benefits of this. I think it would only make sense if there was out-of-order rendering, which would probably need JS anyway (there are possibly ways around this with declarative shadow DOM, but these introduce problems of their own).

matthewp commented 2 months ago

@Tc-001

This raises a question of what happens if the island loads from a newer build than the rest of the page, as styles and resources would probably not align.

I was thinking about this in regards to the private key idea floated above, what happens when you redeploy and the key no longer matches?

Vercel has a feature called skew protection for this. You include a header and it makes sure the request gets routed to the right version of the function. @ascorbic does Netlify have any similar feature to your knowledge? We could allow some configuration for adapters for this.

ascorbic commented 2 months ago

I was thinking about this in regards to the private key idea floated above, what happens when you redeploy and the key no longer matches?

This would only happen if the deploy occurred between the time when the shell starts loading and the request is sent for the islands, so I think it's a marginal edge case. The shell would always be up to date because hosts all invalidate the cache between deploys.

Vercel has a feature called skew protection for this. You include a header and it makes sure the request gets routed to the right version of the function. @ascorbic does Netlify have any similar feature to your knowledge? We could allow some configuration for adapters for this.

Not at the moment, but I think it's planned. This is only an issue for Next.js because they do SPA navigation, so a tab could be sitting open for a long time before the user tries to navigate. That would then request the new page data from a new deploy, with a different deploy ID in the URL.

matthewp commented 2 months ago

This would only happen if the deploy occurred between the time when the shell starts loading and the request is sent for the islands, so I think it's a marginal edge case. The shell would always be up to date because hosts all invalidate the cache between deploys.

Our most used adapter is the regular Node.js adapter. So that means people are deploying it via Docker or just manually on a VPS or something, and those types of setups are probably less likely to have synchronized static / server deployments, I imagine. Still probably an edge case, and not something we can likely help with, though.

Tc-001 commented 2 months ago

Maybe the user/adapter could optionally provide a fallback URL that is versioned to the correct deploy. Cloudflare has CF_PAGES_URL, Netlify has DEPLOY_URL, etc. Although that would lose httpOnly cookies, and I'm unsure if there is a great way to pass them along 🤔


I do think that if the platform offers something like skew protection, there should be a system that allows an adapter to take advantage of it. It could actually be something as simple as letting the adapter override fetch with a custom implementation that sets the necessary headers.
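
A sketch of that hook (the header name and factory are made up for illustration, not an existing adapter API):

// The adapter supplies a deploy identifier; island requests made through
// this wrapped fetch carry it so the platform can route to the right deploy.
export function createIslandFetch(deployId, baseFetch = globalThis.fetch) {
    return (input, init = {}) => {
        const headers = new Headers(init.headers);
        headers.set("x-deploy-id", deployId); // hypothetical header name
        return baseFetch(input, { ...init, headers });
    };
}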

millette commented 2 months ago

Our most used adapter is the regular Node.js adapter. So that means people are deploying it via Docker or just manually on a VPS or something, and those types of setups are probably less likely to have synchronized static / server deployments...

@matthewp In my case, I'll often let Caddy handle static files directly, and only proxy dynamic calls (SSR) to the Node.js adapter. Just my 2 cents.

matthewp commented 2 months ago

https://github.com/withastro/roadmap/assets/361671/08c81118-e9d4-4581-bc78-4291427b41a2

Ignore the content shift caused by my weak Tailwind skills. The top-right is the user's wishlist, cart, and avatar. Here I'm using placeholder content as the fallback.

I think you don't really want this generic placeholder as the fallback though; you probably want it to look the same, but with 0 as the counts for Wishlist and Cart. And you want the avatar to be generic as the fallback and then become the user's avatar once it loads.

Do people agree with that?

What this means though, is that you really want to use the same component as both the fallback and the deferred component.

You wind up writing something like this:

<PersonalBar server:defer>
  <PersonalBar slot="fallback" placeholder />
</PersonalBar>

Which is repetitive and weird. The component itself has to be aware that sometimes it is loading data and sometimes it's not. Not sure what to do about this. Any suggestions appreciated.

Tc-001 commented 2 months ago

Hmmm... there could be an import.meta.env.SSR equivalent (something like Astro.isStatic) that you could use for the component to act as the fallback. It would only be true if the component or one of its parent components has server:defer set and it is currently not being rendered as an island. slot="fallback" could be kept if the dev wants a different component instead.

---
const cartCount = !Astro.isStatic && await db...
---
<div>
  {cartCount && <div>{cartCount}</div>}
  <Icon />
</div>

With this approach, if it is a diff instead of an innerHTML replacement, you could even add some animations to smoothly show the count.


But even with this, it could be quite jarring to see the count appear/update after each navigation. Maybe there could be an option (server:immediate) where some part of the island script is inlined and uses sessionStorage to instantly show a cached version of the component, and then revalidate later.

ascorbic commented 2 months ago

Hmmm... there could be an import.meta.env.SSR equivalent (something like Astro.isStatic) that you could use for the component to act as the fallback.

I suggest something like this in the RFC: an Astro.deferred prop. It would be nice if there was a shorthand way of saying "defer this and use it as its own placeholder too because it knows how to handle that state". It might be a footgun though.

Tc-001 commented 2 months ago

because it knows how to handle that state

I think there shouldn't be any big issues, because the prerendered state would be the same as what a fresh user in a regular SSR app would see. Any errors would also surface at build time, so it should be easy for the dev to catch them and add checks around any SSR-only features.

So the flag still has a use. Maybe it could be a regular Astro.ssr that is basically !isStatic. That way the dev can be sure that the component is not prerendered in any way; it would still cover this use case, plus allow differentiating prerendered pages in hybrid mode, where the same issues would arise. Are there any reasons you can think of for them to be separate?

matthewp commented 2 months ago

Using Astro.* for this would exclude framework components from being server islands. Not sure we should do that yet. import.meta.env.DEFERRED (bikeshed) doesn't feel right either; it would be different for each invocation of the component:

<Cart server:defer /> /* renders with import.meta.env.DEFERRED */
<Cart /> /* does not render with import.meta.env.DEFERRED */

That would break if, for example, the component stashed the value in a variable.

matthewp commented 2 months ago

Also, it's not clear to me yet how often components will want to render their own deferred content vs the caller doing so. It might be that most of the time the component should do it itself, or it could be a rarer thing.

Mortalife commented 2 months ago

One issue I have with the islands being a separate request is prop validation. As opposed to Astro.request.params, I would guess that most people implicitly trust the props given to a component. If not validated, for most people it would probably just be a DoS vector at worst, but it could lead to XSS, SQL injection or some pretty scary stuff (e.g. <RenderFileAsMD path={} server:defer />, or god forbid it somehow gets combined with astro-command)

Couldn't this be handled by requesting the page with an additional header or query param? The frontmatter would still be run as if the page were being rendered as a whole, but only the single component referenced would actually be rendered and returned in the response, similar to const partial = true.

The consequence of this, of course, could be that if the page is large, there may be expensive calls which are re-run on every request.

Maybe a halfway house would be something like this (taking inspiration from actions):

---
import { defineServerIsland } from "astro:islands";

export const prerender = true;

const post = await getPost(Astro.params.slug);

const likeIsland = defineServerIsland({
    name: 'like-island',
    values: {
        //cached values for the server island on regeneration
        post,
    },
    getProps: async ({ post }) => {
      // call a mailing service, or store to a database
      // access request specific information
      const user = await getUser(Astro.cookies.get('session'))
      const liked = await user.getLikedPost(post.id)
      return { post: post.id, liked };
    },
});
---
<Like server:defer={likeIsland}  />

In this example, values can be cached server-side, resolving the issue of exposing values to the client.

MarkBennett commented 2 months ago

I'm new to Astro, so apologies if this has been previously discussed or isn't helpful. If a lot of components are going to have loading, error, and loaded states, would it make sense to implement something like the cell pattern in Redwood.js?

https://redwoodjs.com/docs/tutorial/chapter2/cells#our-first-cell

export const QUERY = gql`
  query FindPosts {
    posts {
      id
      title
      body
      createdAt
    }
  }
`

export const Loading = () => <div>Loading...</div>

export const Empty = () => <div>No posts yet!</div>

export const Failure = ({ error }) => (
  <div>Error loading posts: {error.message}</div>
)

export const Success = ({ posts }) => {
  return posts.map((post) => (
    <article key={post.id}>
      <h2>{post.title}</h2>
      <div>{post.body}</div>
    </article>
  ))
}

This is a React component, but it makes handling the different loading states simple. Perhaps we could add a conditional element to slots that only displayed content when a condition was matched? You could then show/hide slots as the loading status changed.

---
const { title } = Astro.props;
---
<div id="content-wrapper">
  <h1>{title}</h1>
  <slot if-state="loading">LOADING</slot>
  <slot if-state="loaded">LOADED</slot>
  <slot if-state="error">ERROR</slot>
</div>

wassfila commented 2 months ago

Hmmm... a fun thing to add could be a way to "refresh" an island. It could be really simple (I think, I haven't really looked at vanilla events in a bit 😅 )

@Tc-001 Although this is highly important, it seems to me a bit out of scope*, because it expands the use case from "page load with fresh data" to "keep the client UI in sync with the server". To keep in sync, a button is just a start; why should the user have to keep polling? So you'd want a JS polling mechanism. I implemented such behavior with a client island and SSE (Server-Sent Events), which is perfect from a functional perspective, but a nightmare to integrate in various deployments and to make scale. After all, when designing islands we should always keep fallbacks in mind. A client island has full flexibility; a server island is simply a client island that fetches ready HTML instead of data (close to htmx but with styles, ...). Only the use case can judge when one becomes more practical/complex than the other.

*After rethinking about it, exposing a refresh event that can be called by user logic might be a good design that does not try to pack all use-case logic into the Astro framework.
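
For reference, the client side of the SSE approach mentioned above boils down to something like this (the endpoint and target element are made up for the example):

// A client island keeping itself in sync via Server-Sent Events
const source = new EventSource("/api/cart-updates");
source.addEventListener("message", (event) => {
    document.querySelector("#cart-count").textContent = event.data;
});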

matthewp commented 2 months ago

@wassfila Yeah, this is something I've been trying to understand better as well. There is definitely overlap. I see the biggest advantages of server islands being:

I think in some scenarios you will want to use both server and client islands together. There's nothing stopping you from having a client:* directive on a component inside of a server:defer island. That gives you the nice sort of server rendered and client enhanced experience you want.

It might also be possible to have client and server directives on the same component, but that's not something I've tackled yet.

wassfila commented 2 months ago

I think in some scenarios you will want to use both server and client islands together. There's nothing stopping you from having a client:* directive on a component inside of a server:defer island. That gives you the nice sort of server rendered and client enhanced experience you want.

Right, I did not think of that. It's true that something like the GitHub logo won't change instantly, so there's room for both.

It might also be possible to have client and server directives on the same component, but that's not something I've tackled yet.

I see, wow, lots of perspectives,... thanks for the answer.

adrianmroz commented 2 months ago

There's nothing stopping you from having a client:* directive on a component inside of a server:defer island.

At first I was mildly intrigued by this RFC. But with that in mind? I love it. Having two levers to defer - data and interactivity - is so cool.

Tc-001 commented 2 months ago

I assume this would work with prerender = true right?

With prerender, client: and server: together would just be client: with extra steps.
Frontmatter re-running sounds like a recipe for edge cases, so the simplest solution could be the compiler warning you if you do that, or ignore server: and just use static props.

Mortalife commented 2 months ago

Frontmatter re-running sounds like a recipe for edge cases, so the simplest solution could be the compiler warning you if you do that, or ignore server: and just use static props.

Without any level of frontmatter and using static props only it would be exactly the same as just rendering the component directly on the server. You'd gain nothing.

For example, the demo was to load in the user's basket and account via a secondary request; that implies running some request-specific code to identify the user and retrieve their basket. I think the question posed is how you differentiate between a server render and a render-on-request inside the deferred component.

sasoria commented 2 months ago

I've been waiting for something like this! I'm hoping there's support for framework components in Server Islands.

misl-smlz commented 2 months ago

Great to see this RFC. We are currently working on re-implementing a large B2B company and e-commerce platform with Astro, and from my perspective we would benefit greatly from server islands.

Currently we use client:only Vue components for all login-specific components, which request their data from the server in a fetch call (the rest of the page is cached by the CDN): the user button, the shopping cart, or even the price display (there are customized prices). This could be replaced with server islands.

But from my point of view, one more thing would be important: how can server islands be cached locally so that there is no flickering when switching pages?

What I mean by flickering: every time you switch to a new page, the user button is loaded as a server island; the fallback icon appears first, and once the backend component arrives, the correct icon is displayed. If you could use a cached state instead of the fallback icon, there would be no flickering while the server island loads.

In a client:only component, this can be solved using libraries such as https://github.com/vercel/swr or https://github.com/Kong/swrv:

The name “SWR” is derived from stale-while-revalidate, a cache invalidation strategy popularized by HTTP RFC 5861. SWR first returns the data from cache (stale), then sends the request (revalidate), and finally comes with the up-to-date data again.

matthewp commented 2 months ago

Even with SWR there's going to be a flicker: the fallback is going to be visible while the fetch is happening. Even with aggressive caching there's still the time the request takes to get to the CDN and back, and then to update the DOM. I don't know if there's anything we can do about that. Using ViewTransitions is maybe one solution you could use.

Tc-001 commented 2 months ago

🤔 You could use an inline script and sessionStorage - as long as the swap happens before the page loads, there shouldn't be a flash.
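
A minimal sketch of that idea (attribute and key names are hypothetical, and to truly avoid a flash it would need to run as an inline, render-blocking script):

document.querySelectorAll("astro-island[defer]").forEach(async (island) => {
    const endpoint = island.getAttribute("component-endpoint");
    const key = "island:" + endpoint;
    const cached = sessionStorage.getItem(key);
    if (cached) island.innerHTML = cached; // show the last known content immediately
    const fresh = await fetch(endpoint).then((res) => res.text());
    sessionStorage.setItem(key, fresh);
    island.innerHTML = fresh; // then revalidate with the fresh render
});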

misl-smlz commented 2 months ago

@matthewp The suggestion from @Tc-001 works from my point of view (and was also the direction I had in mind). In my opinion, it would make sense to incorporate something like this directly as a feature.

jamesli2021 commented 2 months ago

Inline scripts will need to take note of Content Security Policy (CSP), i.e. whether we need to add inline script integrity.
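
For example, with a hash-based CSP the inline replacement script's hash would need to be allow-listed, roughly like this (placeholder hash):

<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' 'sha256-<base64 hash of the inline script>'">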

matthewp commented 2 months ago

There's now a preview release available to try: https://github.com/withastro/astro/pull/11305

Note that it only works in dev mode. Please leave any feedback here and not in the PR. Thank you!

matthewp commented 2 months ago

Stage 3 RFC first draft is up: https://github.com/withastro/roadmap/pull/963

ematipico commented 2 months ago

Closing. https://github.com/withastro/roadmap/pull/963

Please use the PR to further continue possible discussions.