
Performance - Reduce build time and memory usage - use alternative webpack JS loaders #4765

Open · slorber opened this issue 3 years ago

slorber commented 3 years ago

💥 Proposal

With Webpack 5 support, re-build times are now faster.

But we still need to improve the first-build time, which is currently not great.

Some tools to explore:

It will be hard to decouple Docusaurus totally from Webpack at this point.

But we should at least provide a way for users to use an alternative (non-Babel) JS loader that could be faster and good enough. Docusaurus core should provide a few alternate loaders that work out of the box with the classic theme, enabled by just switching a config flag.

If successful and faster, we could make one of those alternate loaders the default for new sites (when no custom Babel config is found in the project).

Existing PR by @SamChou19815 for esbuild: https://github.com/facebook/docusaurus/pull/4532

slorber commented 3 years ago

For anyone interested, we added the ability to customize the jsLoader here https://github.com/facebook/docusaurus/pull/4766

This gives you the opportunity to replace Babel with esbuild; you can add this to your config:

  webpack: {
    jsLoader: (isServer) => ({
      loader: require.resolve('esbuild-loader'),
      options: {
        loader: 'tsx',
        format: isServer ? 'cjs' : undefined,
        target: isServer ? 'node12' : 'es2017',
      },
    }),
  },

We don't document it yet (apart from here). We may recommend it later for larger sites if it proves successful according to feedback from early adopters, so please let us know if it works for your use case.

Important notes:

adventure-yunfei commented 3 years ago

came from https://github.com/facebook/docusaurus/issues/4785#issuecomment-860705444.

Just wondering: is this issue aiming to reduce build time for the entire site generator (including the md/mdx parser for docs), or just JSX React pages?

slorber commented 3 years ago

@adventure-yunfei md docs are compiled to React components with MDX, and an alternative JS loader like esbuild also processes the output of the MDX compiler, so this applies to documentation as well. Check the MDX playground to test the MDX compiler: https://mdxjs.com/playground/

If you have 10k docs, you basically need to transpile 10k React components.
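
To make that concrete: roughly speaking (this is a simplified approximation, not the exact MDX v1 output), a doc is compiled into a JSX component that the JS loader then has to transpile:

  // Input: docs/hello.md containing "# Hello\n\nSome *text*."
  // Approximate shape of the MDX compiler output (simplified):
  import React from 'react';

  export default function MDXContent() {
    return (
      <>
        <h1>Hello</h1>
        <p>
          Some <em>text</em>.
        </p>
      </>
    );
  }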

adventure-yunfei commented 3 years ago

@slorber Perfect! We're also trying to use esbuild to boost build time (for an application project). I'll give this a try.

BTW I've created a similar large doc site here from our internal project.

Update: tested with a higher-perf PC (numbers in my comment below).

alphaleonis commented 3 years ago

This gave a nice performance boost, although I think there is still more to be desired. Out of curiosity, what is actually going on behind the scenes that takes so much time? In our case (a site with around 2,000 .md(x) files), most of the time seems to be spent before and after the "Compiling Client/Compiling Server" progress bars appear and complete.

As it stands, building the site takes around 20 minutes with esbuild, and was closer to 40 minutes before. Out of curiosity, I then tested adding four versions to our site and building it. Before using esbuild, the process took just shy of 13 hours(!). Using esbuild it was down to just shy of 8 hours (still way too long to be acceptable). So while it was a big improvement, it still seems very slow.

In the second case, it reported:

[success] [webpackbar] Client: Compiled successfully in 1.33h
[success] [webpackbar] Server: Compiled successfully in 1.36h

What was going on for the remaining 5 hours? Is this normal behavior, or did we configure something incredibly wrong? And why does it take much more than four times as long with four versions added?

slorber commented 3 years ago

@alphaleonis it's hard to say without further analysis, but the MDX compiler transforms each md doc into a React component that is later processed by Babel (or esbuild).

The MDX compiler might be a bottleneck, this is why I'd like to provide an alternate MD parser for cases where MDX is not really needed.

Webpack might also be a bottleneck.

Using esbuild is not enough. Also, when using esbuild as a webpack loader, we are not really leveraging esbuild's full speed benefits. Unfortunately we can't easily replace Webpack with esbuild: Webpack is part of our plugin lifecycle API and is more fully featured than esbuild (we use various things like file-loader, the SVGR loader...).

What was going on for the remaining 5 hours? Is this normal behavior, or did we configure something incredibly wrong?

We have enabled Webpack 5's persistent caching, and rebuild times are much faster. You need to persist node_modules/.cache across builds to leverage it.
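
(For context, persistent caching at the webpack level looks roughly like this; Docusaurus configures it internally, so the snippet is purely illustrative:)

  // Illustrative webpack 5 config (Docusaurus sets this up itself):
  module.exports = {
    // ...
    cache: {
      type: 'filesystem', // persists to node_modules/.cache/webpack
    },
  };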

And why does it take much longer than four times the amount of time with four versions added?

It's hard to tell without measuring on your system. Your system may not have enough memory for Webpack to do its job efficiently, leading to more garbage collection or the like.

alphaleonis commented 3 years ago

@slorber Thanks for the explanation. We did try persistent caching, and it seems to help a lot with the time spent during the "Build server/client" phase (which I assume is Webpack). The machine in question had 16GB of memory, and the same was specified as max_old_space_size.

Is there any way we can do some further analysis, such as enabling some verbose logging to get more details? Or is this kind of the expected build time for sites of that size? (If so, I guess we will have to find another solution for versioning, such as building/deploying each version separately.)

johnnyreilly commented 3 years ago

Also when using esbuild as a webpack loader, we are not really leveraging the full speed benefits of esbuild

This is true - but there's still a speed benefit to take advantage of, and it's also pretty plug-and-play. See my post here:

https://blog.logrocket.com/webpack-or-esbuild-why-not-both/

slorber commented 3 years ago

Is there any way we can do some further analysis, such as enabling some verbose logging to get some more details perhaps?

This is a Webpack-based app, and the plugin system lets you tweak the Webpack config to your needs (the configureWebpack lifecycle) and add logs or whatever else can help troubleshoot the system. You can also modify your local Docusaurus installation and add tracing code if you need to.

I'm not an expert in Webpack performance debugging, so I can't help you much with how to configure webpack and what exactly to measure; you'll have to figure that out yourself for now.
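
As a starting point, here is a minimal sketch of a local plugin that enables webpack's built-in profiling and verbose infrastructure logging (the plugin name is made up; the options are standard webpack 5 config):

  // in docusaurus.config.js
  plugins: [
    function profilingPlugin() {
      return {
        name: 'profiling-plugin', // hypothetical local plugin
        configureWebpack() {
          return {
            profile: true, // collect per-module timing info in stats
            infrastructureLogging: {level: 'verbose'},
          };
        },
      };
    },
  ],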

Or is this kind of the expected build time for sites of that size?

It's hard to have meaningful benchmarks. The number of docs is one factor, but the size of each doc obviously matters too, so one site is not strictly comparable to another. A 40min build time for 2,000 MDX docs with Babel seems expected when comparing with other sites. Obviously it's too much and we should aim to reduce that build time, but it's probably not an easy thing to do.

(If so, I guess we will have to find another solution for versioning, such as building/deploying each version separately.)

For large sites, it's definitely the way to go, and it's something I'd like to document/encourage more in the future. It's only useful to keep multiple versions in master while you actively update them. Once a version becomes unmaintained, you should rather move it to a branch and create a standalone immutable deployment for it, so that your build time does not keep increasing as time passes and your version count grows.

We have made it possible to include "after items" in the version dropdown, so that you can include external links to older versions, and we use it on the Docusaurus site itself:

(screenshot: the Docusaurus version dropdown, with external links to older versions appended after the regular items)
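
The relevant navbar configuration looks roughly like this (recalled from the theme-classic docsVersionDropdown options, so double-check against the docs; the URL is a hypothetical archived deployment):

  // in docusaurus.config.js, themeConfig.navbar.items
  {
    type: 'docsVersionDropdown',
    position: 'left',
    dropdownItemsAfter: [
      // external links to archived versions deployed separately
      {href: 'https://v1.your-site.com', label: '1.x.x'},
    ],
  },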

I also want to have a "docusaurus archive" command to support this workflow better, giving the ability to publish a standalone version of an existing site and then remove that version.

adventure-yunfei commented 3 years ago

Tested with a higher-perf PC:

  • with Docusaurus 2.0.0-beta.0, doc site generation finished in 63min
  • with the latest in-dev version, doc site generation finished in 30min. Time reduced by 50%. 👍

Sadly, the process uses a very large amount of memory. My local testing environment has 32GB of memory, but in the CI/CD environment the memory limit is 20GB. The process is killed because of OOM during the emitting phase. From the monitor, memory suddenly increased from 8GB to 20GB+.

slorber commented 3 years ago

It is unexpected that beta.2 is faster than beta.0, maybe you didn't clear your cache?

The process is killed because of OOM during the emitting phase. From the monitor, memory suddenly increased from 8GB to 20GB+.

What do you mean by the "emitting phase"? I didn't take much time to investigate all this so any info can be useful.

adventure-yunfei commented 3 years ago

It is unexpected that beta.2 is faster than beta.0, maybe you didn't clear your cache?

I'm using the esbuild-loader config from the Docusaurus website example, so it should be esbuild making the build faster.

What do you mean by the "emitting phase"? I didn't take much time to investigate all this so any info can be useful.

This may not be accurate. The process memory was around 7GB most of the time. About 20 minutes in, memory jumped to 20.2GB while the console was showing the client "emitting" phase. After the client build finished, memory dropped back to 7GB (the server was still building).
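
(If anyone wants to reproduce this kind of measurement from inside the build, here is a sketch of a local Docusaurus plugin that logs process memory every 10 seconds; the plugin name is made up:)

  // in docusaurus.config.js
  plugins: [
    function memoryLoggerPlugin() {
      return {
        name: 'memory-logger', // hypothetical local plugin
        configureWebpack() {
          const timer = setInterval(() => {
            const {rss, heapUsed} = process.memoryUsage();
            console.log(
              `rss=${(rss / 1e9).toFixed(1)}GB heap=${(heapUsed / 1e9).toFixed(1)}GB`,
            );
          }, 10000);
          timer.unref(); // don't keep the process alive just for the timer
          return {};
        },
      };
    },
  ],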

krillboi commented 3 years ago

Trying to test esbuild-loader but running into some trouble.

I have added the following to the top level of my docusaurus.config.js file:

  webpack: {
    jsLoader: (isServer) => ({
      loader: require.resolve('esbuild-loader'),
      options: {
        loader: 'tsx',
        format: isServer ? 'cjs' : undefined,
        target: isServer ? 'node12' : 'es2017',
      },
    }),
  },

I have added the following to my dependencies in package.json:

    "esbuild-loader": "2.13.1",

The install of esbuild-loader fails. Am I missing more dependencies for this to work? It might also be a Windows problem; unsure right now.

krillboi commented 3 years ago

Seems like it was one of the good ol' corporate proxy issues giving me the install troubles.

I'll try and test the esbuild-loader to see how much faster it is for me.

krillboi commented 3 years ago

Tested yesterday with a production build: it took about 3 hours compared to 6 hours before (~400 docs x 5 versions x 4 languages).

So about half the time with esbuild-loader, which is nice. But we are reaching a number of docs where I am now looking into archiving older versions, as seen on the Docusaurus site.

This may not be accurate. The process memory was around 7GB most of the time. About 20 minutes in, memory jumped to 20.2GB while the console was showing the client "emitting" phase. After the client build finished, memory dropped back to 7GB (the server was still building).

I witnessed the same thing, where memory usage would suddenly spike to 25+GB.

slorber commented 3 years ago

Thanks for highlighting that, we'll try to figure out why it suddenly takes so much memory.

slorber commented 3 years ago

Not 100% related but I expect this PR to improve perf (smaller output) and reduce build time for sites with very large sidebars: https://github.com/facebook/docusaurus/pull/5136 (can't really tell by how much though, it's site specific so please let me know if you see a significant improvement)

adventure-yunfei commented 3 years ago

Not 100% related but I expect this PR to improve perf (smaller output) and reduce build time for sites with very large sidebars: #5136 (can't really tell by how much though, it's site specific so please let me know if you see a significant improvement)

Tested my application with the latest dev version.

It doesn't seem to help in my case.

Update:

adventure-yunfei commented 3 years ago

This may not be accurate. The process memory was around 7GB most of the time. About 20 minutes in, memory jumped to 20.2GB while the console was showing the client "emitting" phase. After the client build finished, memory dropped back to 7GB (the server was still building).

I've made another test, using a plugin to override the .md loader with a no-op:

// inside docusaurus.config.js
{
  // ...
  plugins: [
    function myPlugin() {
      return {
        configureWebpack() {
          return {
            module: {
              rules: [
                {
                  test: /\.mdx?$/,
                  include: /.*/,
                  use: {
                    loader: require('path').resolve(__dirname, './scripts/my-md-loader.js')
                  }
                }
              ]
            }
          }
        }
      };
    }
  ],
}
// scripts/my-md-loader.js
// A no-op loader: emits a trivial valid module for every md/mdx file,
// taking the MDX compiler out of the equation entirely.
module.exports = function myMdLoader() {
    const callback = this.async();
    return callback && callback(null, 'export default () => null;');
};

And then I ran the doc builder again.

So I'm afraid it's the code of the page wrapper (e.g. top bar, side navigation, ...) that causes the max memory usage. Switching the mdx-loader to another one may not help.

slorber commented 3 years ago

@adventure-yunfei it's not clear to me how you took those measurements, can you explain?

If you allow Docusaurus to take up to 20GB, it may end up taking 20GB. And it may take more if you give it more. The question is: how much can you reduce the max_old_space_size Node.js setting before it starts crashing due to OOM?

So I'm afraid it's the code of page wrapper (e.g. top bar, side navigation, ...) that causes the max memory usage. Switching mdx-loader to another one may won't help.

Proving that a memory issue is not in the mdx-loader does not mean it's the "page wrapper". There is much more involved here than the React server-side rendering.

I suspect there are optimizations that can be done in this webpack plugin's fork we use: https://github.com/slorber/static-site-generator-webpack-plugin/blob/master/index.js

Gatsby used it initially and replaced it with some task queueing system.

adventure-yunfei commented 3 years ago

Proving a memory issue is not the mdx-loader does not mean it's the "page wrapper". There is much more involved than the React server-side rendering here.

That's true. By "page wrapper" I mean any code other than the md page content itself. I'm just trying to provide more perf information to help identify the problem.

More info:

hjiog commented 2 years ago

When the number of documents is large, running yarn start is still slow. Do you have a plan to support Vite?

Josh-Cena commented 2 years ago

The bundler is a bit hard to swap out. Next.js afaik is going through the same struggle: basically the entire core infra is coupled with Webpack, so the most we can do is let you use different JS loaders (esbuild vs Babel) rather than an entirely different bundler. If you have the energy... you can try forking Docusaurus and re-implementing the core with Vite.

slorber commented 2 years ago

There is some interest in making Docusaurus bundler- and framework-agnostic in the future through an adapter layer, but it's likely to be complex to implement in practice, and our current plugin ecosystem also relies on Webpack, so it would be a disruptive breaking change for the community.

Josh-Cena commented 2 years ago

Makes me wonder if it's possible to swap out Webpack in our core entirely 🚎 As Docusaurus 3.0, rebuilt with Vite/Parcel/...

armano2 commented 2 years ago

@slorber I did some big refactoring, small optimizations, and removed a bunch of dependencies of static-site-generator-webpack-plugin - https://github.com/slorber/static-site-generator-webpack-plugin/pull/2 and https://github.com/slorber/static-site-generator-webpack-plugin/pull/1

I trimmed down the package, but there is still a bunch of improvements to be done there.


We should generally avoid using this, as it is "extremely" slow:

const webpackStatsJson = webpackStats.toJson({ all: false, assets: true }, true);

https://github.com/webpack/webpack/issues/12102 https://github.com/webpack/webpack/issues/6083
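
A sketch of the kind of alternative: read what you need straight off the compilation object instead of serializing the whole stats (standard webpack 5 hooks; illustrative only):

  // Inside a webpack plugin's apply(compiler):
  compiler.hooks.afterEmit.tap('CollectAssets', (compilation) => {
    // compilation.assets maps output filenames to Source objects, so
    // iterating it avoids the expensive stats.toJson() serialization.
    for (const [name, source] of Object.entries(compilation.assets)) {
      console.log(name, source.size()); // size() is the byte length
    }
  });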


The next potentially slow / resource-heavy part is the package eval: this code spawns a VM for each entry point and evaluates its code.

adventure-yunfei commented 2 years ago

I've found and fixed the large maximum memory issue in https://github.com/slorber/static-site-generator-webpack-plugin/pull/3.

Investigation

After investigation, the max memory usage happened inside static-site-generator-webpack-plugin, while rendering every page path. So I took a look at the static-site-generator-webpack-plugin code and found two problems:

  1. Memory/GC issue. All pages are rendered at the same time (see code). The render is async, and its allocated resources cannot be freed until the render promise finishes. Thus, in the worst case, the maximum allocated memory is the sum of the resources for rendering every page, i.e. O(M*N) memory, where M is the page count and N is the memory allocated to render one page. That's not necessary; bounding the concurrency avoids it (see the sketch after this list).
  2. Duplicate rendering. After rendering one page, it crawls relative paths and continues rendering those relative paths/pages (see code). That may lead to many duplicate page renderings.
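
A minimal sketch of the bounded-concurrency fix for point 1 (not the actual PR code; renderPage stands in for the plugin's per-path render function):

  // Keep at most `concurrency` renders in flight, so peak memory is
  // O(concurrency * N) instead of O(M * N).
  async function renderPaths(paths, renderPage, concurrency = 32) {
    const results = new Array(paths.length);
    let next = 0;
    async function worker() {
      while (next < paths.length) {
        const i = next++;
        results[i] = await renderPage(paths[i]);
      }
    }
    const workerCount = Math.min(concurrency, paths.length);
    await Promise.all(Array.from({length: workerCount}, () => worker()));
    return results;
  }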

To fix them, I changed the plugin's rendering logic in the PR linked above.

Check the optimization result below

Before optimization

The maximum allocated memory was 21+GB; it increased quickly during static-site-generator-webpack-plugin renderPaths, and then dropped down quickly (from 11:07 to 11:21).

After optimization

The maximum allocated memory was 7.1GB during static-site-generator-webpack-plugin renderPaths (from 16:25 to 16:40), with no large spike (and a maximum of 8.2GB for the whole build, which happens during Docusaurus core handleBrokenLinks).

The maximum memory decreased, while the build time remained the same.

Further information

  1. Build time summary, total 35min:
    • webpack compile: 27min
      • static-site-generator-webpack-plugin renderPaths: 15min
    • handleBrokenLinks: 8min
  2. The render-page function is defined in Docusaurus core serverEntry.ts. After checking the code:
    • I guess the large memory allocation comes from the minifier. I've seen minifiers consume large amounts of memory before.
    • The manifest is read & parsed multiple times (see code). We can optimize it to read only once (a memoized-reader sketch follows this list). In my case the manifest JSON is 2MB, and reading & parsing it 7722 times costs 2min.
  3. In my case, only the large-memory-allocation issue is validated (@krillboi please help validate it in your case). There are no relative paths in my case, so the perf result for "avoid duplicate page rendering" is not validated. @alphaleonis you may test it in your case; from your description, I suspect your non-linear build time increase is caused by this code.
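
A sketch of the read-once idea from point 2 (hypothetical helper; the real fix would live in serverEntry.ts):

  const fs = require('fs');

  // Parse the ~2MB manifest a single time and reuse it for all ~7.7k page
  // renders, instead of re-reading and re-parsing it for every page.
  let cachedManifest;
  function readManifest(manifestPath) {
    if (cachedManifest === undefined) {
      cachedManifest = JSON.parse(fs.readFileSync(manifestPath, 'utf8'));
    }
    return cachedManifest;
  }
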
RDIL commented 2 years ago

Tip: one of the best ways to reduce build time and memory is to use esbuild-loader instead of babel-loader. See this repo's website config for the setup and usage.

slorber commented 2 years ago

Thanks for working on this, will read all that more carefully and review PRs soon.

FYI afaik Gatsby also moved to a queuing system a while ago and that was something I wanted to explore. It's worth comparing our code to theirs.


Something I discovered recently: JS can communicate more seamlessly with Rust thanks to napi_rs with some shared memory, while it's more complicated in Go.


https://twitter.com/sebastienlorber/status/1460624240579915785 https://twitter.com/sebastienlorber/status/1468522862990536709

For that reason, it's really worth trying SWC instead of esbuild or Babel as the JS loader. I believe it may be faster than esbuild when used as a webpack loader, while esbuild may be faster when you go all-in and stop using Webpack/loaders.

Next.js has great results with SWC, and we may eventually be able to leverage their Rust extensions to support things like Styled-Components/Emotion. https://nextjs.org/blog/next-12#faster-builds-and-fast-refresh-with-rust-compiler

If someone wants to make a POC PR on our own website and compare build times with cold caches, that could be interesting.
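
Such a POC could start from something like this (untested sketch; swc-loader options mirror @swc/core's options, so verify against their docs):

  webpack: {
    jsLoader: (isServer) => ({
      loader: require.resolve('swc-loader'),
      options: {
        jsc: {
          parser: {syntax: 'typescript', tsx: true},
          target: 'es2017',
        },
        module: {type: isServer ? 'commonjs' : 'es6'},
      },
    }),
  },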

Josh-Cena commented 2 years ago

@slorber I have a question that I'm unable to figure out: if our site is using esbuild, why is there still a Babel message in the command line saying that the changelog has exceeded 500KB?

RDIL commented 2 years ago

Personally, after using SWC and ESBuild for a while, I honestly prefer ESBuild. SWC is not documented nearly as well, and ESBuild has very frequent releases fixing bugs and adding features. ESBuild has a nicer DX IMO.

alexander-akait commented 2 years ago

A smooth integration between swc and webpack is planned, so if you want to avoid big changes/refactors/unpredictable bugs, prefer swc (part of the integration: https://github.com/swc-project/swc/tree/main/crates/swc_webpack_ast). swc also has better perf in some cases, but both are fast.

swc is a parser/codegen/visitor/bundler/transpiler/etc., while esbuild is more of just a bundler; these are slightly different things. So if in the future you want deeper native integration, especially based on Rust, I recommend swc. This is not an advertisement, just notes for other developers.

RDIL commented 2 years ago

ESBuild's and swc's performance difference should be very tiny, given that both are much faster than JS-based tools. I don't really think it's worth comparing the two on performance, since both are clearly very fast.

If one provides a better experience than the other, is it really worth benchmarking them over a difference of something like half a second?

alexander-akait commented 2 years ago

Please read:

swc also has better perf in some cases, but both are fast.

My point is more that swc provides more things out of the box, so if you need a custom plugin/transformer/code generator for JS/CSS/etc., I strongly recommend swc; bundling is not the only thing in a build pipeline.

slorber commented 2 years ago

Thanks @alexander-akait, we'll try to keep up with the work Vercel and Webpack are doing and see what we can reuse here.

SWC is more extensible, and we may even implement a Rust plugin someday to process our i18n API and replace the global registry of messages with inline localized translation strings directly in the app bundle (i.e., better code splitting for translations).
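
To illustrate the idea (hypothetical compile-time output; no such transform exists today):

  // Today: <Translate> resolves its message at runtime from a global registry.
  import Translate from '@docusaurus/Translate';

  export default function Hello() {
    return <Translate id="homepage.hello">Hello</Translate>;
  }

  // A compile-time transform could instead inline the localized string into
  // each locale's bundle (hypothetical output for the fr locale):
  export default function Hello() {
    return <>Bonjour</>;
  }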

And it should also make it easier to use Emotion/StyledComponents with Docusaurus, as Vercel is already working on porting existing Babel plugins to Rust.

@slorber I have a question that I'm unable to figure out: if our site is using esbuild, why is there still a Babel message in the command line saying that the changelog has exceeded 500KB?

😅 Good question, maybe it's related to translation extraction? But afaik that's not run when starting the dev server... weird.

Josh-Cena commented 2 years ago

A quick guess: MDX v1 uses Babel under the hood to do the transformation: https://github.com/mdx-js/mdx/blob/master/packages/mdx/mdx-hast-to-jsx.js#L1

It seems MDX v2 has removed this dependency.

yangshun commented 2 years ago

I've been using Next.js a fair bit this year, and honestly, if I could turn back time, I think @endiliey and I shouldn't have built our own site generator for Docusaurus v2; we should have used Next.js instead. Admittedly, it was a mix of not-invented-here syndrome and wanting to learn how to build a site generator from scratch.

At this point, Next.js is a clear winner in the SSG race and Gatsby is more or less out. Vercel is doing so well with their latest funding rounds and star hires, I think it's safe to bet on Next.js.

Docusaurus v2 is split into 3 layers: our homegrown (1) SSG infra, (2) plugins, and (3) UI/themes. If I were to build Docusaurus v3, I would make it more like Nextra: swap out (1) for Next.js and retain (2) and (3). Docusaurus 3 would provide all the documentation-related features. I feel Docusaurus 2 had to play catch-up a lot and implement many non-documentation-specific features required by websites that Next.js already provided. We could have saved lots of time by standing on the shoulders of giants.

With Next.js' current popularity and trajectory, I think it's only a matter of time before someone builds a fully-fledged docs theme on top of Next.js that does everything Docusaurus does, but probably better, because their SSG infra is much more optimized by virtue of being on Next.js. IMO many users would also like the SSR features Next.js provides, so that they can build auth and have better integration with their core product.

Josh-Cena commented 2 years ago

I still like the idea of having "dependency independence". Apart from Webpack / React Router / other low-level infra, we aren't coupled to any dependency. It means we can describe our architecture as an integral thing without saying "the peripheral is Docusaurus, but the core, well, is Next.js and it's a black box". Working on Docusaurus frankly made me a lot more familiar with how SSG works 😄

slorber commented 2 years ago

@yangshun @Josh-Cena we seem to all agree on this: the value proposition of Docusaurus is all about the plugins and opinionated docs features, letting you get started very fast while keeping great flexibility.

That was also my opinion on day one, but I also think that having our own SSG wasn't totally useless: it permitted us to iterate faster without being blocked by the limits of an existing dependency, and it gave us time to better evaluate Gatsby vs Next.js vs others (the choice wasn't so clear in 2019 😅 and Remix remains an interesting new option today).

We discussed this with @zpao and @JoelMarcey a few months ago and we agreed that Docusaurus should rather migrate to Next.js.

Or become framework-agnostic. This might be more complicated to document well, and harder to implement, but could allow using other solutions like Remix or Gatsby.

And building on top of Next.js also incentivizes Vercel to invest more in Docusaurus 🤷‍♂️ eventually we could join forces with Nextra if the companies can agree on that.


Now, I don't think it is going to be in 3.0, because 3.0 will likely come quite soon if we start to respect Semver more strictly (see https://github.com/facebook/docusaurus/issues/6113).

Josh-Cena commented 2 years ago

One thing I'd regret about migrating to Next.js is that we would be forever tied to Webpack, because from my observation the Webpack 5 migration was more painful for them than for us. Webpack is ultimately not comparable in performance to, say, esbuild... 🤔

alexander-akait commented 2 years ago

Next.js is starting to migrate to swc (Rust) and to replace webpack more and more, so you should not be afraid of that; as I wrote above, it will be a smooth migration.

slorber commented 2 years ago

One thing I'd regret about migrating to Next.js is that we would be forever tied to Webpack, because from my observation the Webpack 5 migration was more painful for them than for us. Webpack is ultimately not comparable in performance to, say, esbuild... 🤔

I agree that we want something fast, but I believe that's also the goal of Next.js 😅

Their Webpack 5 migration was likely more complex because of the higher diversity of sites needing to migrate, compared to our low diversity: most doc sites are not customized that much, and plugins don't always tweak Webpack settings.

Also, there's value in keeping at least some things in Webpack for now: our plugin ecosystem can remain backward-compatible.

Josh-Cena commented 2 years ago

Yeah, in the short term, migrating to Next.js is surely going to yield lots of benefits. I've never actually used it purely as an SSG, more as a React framework, but if we can figure out how to make them interoperate it will be very nice!

yangshun commented 2 years ago

the choice wasn't so clear in 2019

Yep, totally true. Back then I referenced how Gatsby did lots of things, and I might actually have just chosen Gatsby to build on top of.

Webpack ultimately is not comparable in terms of performance to, say, esbuild

The thing is, with the backing Next.js has, they will just use the fastest tool out there, and we can benefit from it by building on top of Next.js. I believe Sebastien is also saying the same. Hopefully we can go with Next.js in the next version (or even better, if it can be framework-agnostic)!

gabrielcsapo commented 2 years ago

As outsiders, we looked at all the options, and Docusaurus was the best in terms of level of investment and a clean, clear plugin and theming architecture. I think competition in this space is much needed, and having both Nextra and Docusaurus is great for pushing the envelope.

I don't think the alternatives to webpack are stable enough yet for an apples-to-apples comparison. By the next major version, the landscape is going to look very different; or maybe webpack migrates to Rust and no one needs to do any major rearchitecting at all.

Josh-Cena commented 2 years ago

Allowing alternative JS loaders may hinder the provision of useful OOTB JS syntax extensions. For example, @slorber mentioned somewhere that we may explore making <Translate> a zero-runtime API through Babel transformations. I also talked to him about solving #4530 through a code transformation of '@theme-original/*' to the actual path of the "next component in the theme component stack", instead of using static Webpack aliases. I would definitely want to use SWC/esbuild, but in any case, it would mean writing the transform plugin against a different set of APIs, maybe even in a different language. That makes it not scalable. If we have to insert an extra loader that uses Babel, then we are back to square one and perf will be compromised.
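
For concreteness, a minimal sketch of what such a transform could look like as a Babel plugin (entirely hypothetical; resolveNext is a made-up option standing in for the "next component in the stack" lookup):

  // babel-plugin-theme-original (hypothetical)
  module.exports = function themeOriginalPlugin() {
    return {
      name: 'theme-original-rewrite',
      visitor: {
        ImportDeclaration(path, state) {
          const source = path.node.source.value;
          const prefix = '@theme-original/';
          if (source.startsWith(prefix)) {
            // Rewrite '@theme-original/Foo' to the concrete path of the
            // next component in the theme component stack.
            path.node.source.value = state.opts.resolveNext(
              source.slice(prefix.length),
            );
          }
        },
      },
    };
  };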

RDIL commented 2 years ago

Perf is arguably already compromised by using Babel in the first place.

Josh-Cena commented 2 years ago

Perf is arguably already compromised by using Babel in the first place.

Yes, that's the whole point here. We either want to use Babel throughout or drop Babel altogether. We don't want to do custom code transformations (the two I mentioned) through Babel and then delegate the rest of transpilation to another JS loader. But it would not be scalable to support multiple JS loaders, especially if it's bring-your-own-parser.

RDIL commented 2 years ago

Personally, I think we should drop Babel.

Pros:

Cons:

  • Transforms may need to be ported to Rust, not 100% sure on that one though.
  • Would need to upgrade to MDX 2?

Josh-Cena commented 2 years ago

Babel is still useful as a default, because babel.config.js is documented as public API, and users who want more JS syntax will often find it easier to search for a Babel plugin. We can certainly promote SWC/esbuild by providing OOTB configurations, but ultimately it means we need to support multiple JS loaders.

Transforms may need to be ported to Rust, not 100% sure on that one though.

If you are talking about SWC: the current plugin system is still JS-based. https://swc.rs/docs/usage/plugins

Would need to upgrade to MDX 2?

Yeah, not a huge deal though; I think MDX 1 only does limited transformation with Babel.