We have discussed this for webpack@5.
Minifying modules has flaws:
I think there is only one solution here: reduce memory usage on the terser side, or increase the memory available to Node.
It's interesting to hear this has already been discussed!
What do you mean by inefficient compression though? I feel very confident that compressing modules would yield the same result as compressing the chunk (if we ignore the module wrapper code).
I definitely see the part about still needing to minimize the chunk for the non-module code. Maybe Terser could be configured to perform a more lightweight optimization there. But I haven't really thought about it much.
That a plugin can alter things after `optimizeModules` is a problem. But a plugin can also alter things after `optimizeChunkAssets`, which is when this plugin runs. `optimizeChunkModules` runs after those two, though.
What are the cases where you saw overload btw? I haven't really considered those yet.
@filipesilva
> What do you mean by inefficient compression though? I feel very confident that compressing modules would yield the same result as compressing the chunk (if we ignore the module wrapper code).
The non-module webpack code. Also, it is bad practice to uglify anything less than the whole file (other bundlers do the same); I can provide examples.
Also, don't forget that some plugins can emit JS files and we compress those too (we don't parse them, only compress them), so a module-level implementation increases CPU/memory usage.
> I definitely see the part about still needing to minimize the chunk for the non-module code. Maybe Terser could be configured to perform a more lightweight optimization there. But I haven't really thought about it much.
It is still required to send the whole JS file to terser, so it increases memory usage.
Also, the parallel option increases CPU load and, in theory, compilation time.
As I said above, no golden solution exists here:
- You can write a terser-loader to achieve this. The optimization is worse, but if you don't care you can boost performance a lot (especially with module-level caching: watch, cache-loader, or persistent caching in webpack 5). You will end up with some variables not mangled: `__webpack_require__`, `module`, `exports`. This can be quite expensive.
- You can try to invoke the terser-webpack-plugin in addition to this with the `compress` option disabled. Not sure if it's worth it (probably not).
- It's also possible to use `splitChunks.maxSize` to split chunks into smaller assets (a config sketch follows below).
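A minimal sketch of that last option (hypothetical values; `splitChunks.maxSize` exists in webpack ≥ 4.15):

```js
// webpack.config.js (sketch): cap emitted chunk size so each minifier
// worker receives a smaller input
module.exports = {
  optimization: {
    splitChunks: {
      chunks: "all",
      maxSize: 250000 // bytes; webpack tries to split chunks larger than this
    }
  }
};
```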
The real benefit from terser only manifests after modules are concatenated, so I don't think a loader would do much good. Using terser in a loader would greatly reduce the optimizations that Terser is able to do, whereas using it as a plugin that optimizes modules after they are concatenated would retain the same optimizations for those modules.
Terser runs fastest when the input code is smaller!
@sokra it's possible to pre-warm the terser mangle cache with any global webpack variables you need to use from within, by using the mangle cache options. This will enable you to mangle `__webpack_require__` as `a` or anything you want.
@fabiosantoscode can you provide an example?
@evilebottnawi sure!
```js
const terser = require("terser")

const { code } = terser.minify(`
  const foo = __webpack_require__("foo").default;
`, {
  module: true,
  mangle: {
    cache: {
      props: new Map([
        ["__webpack_require__", "a"],
        ["__other_stuff__", "b"],
      ])
    }
  }
})

console.log(code) // -> a("foo").default
```
@fabiosantoscode how does it decrease minification time, and by how much?
In webpack's case you'll want to create a copy of this Map for each module. Terser mutates it. If you don't want to change the user-provided `options.mangle`, there's a `nameCache` top-level property which can be given to Terser so that it can modify the option itself. I recommend doing this.
I don't have any benchmarks on hand, but I've read this fact (that minification time is reduced) several times before.
There's also a nice advantage: if a user has an issue with Terser, they can reproduce it more easily since chunk generation is out of the picture.
Here's what the options look like with `nameCache`:
```js
{
  module: true,
  nameCache: {
    vars: {
      props: new Map([
        ["__webpack_require__", "a"],
        ["__other_stuff__", "b"],
      ])
    }
  },
  mangle: true
}
```
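To illustrate the sharing (a hypothetical sketch using the same synchronous `terser.minify` API as the example above, not code from this thread): reusing one `nameCache` object across calls keeps mangled names consistent between modules, because Terser mutates the cache in place:

```js
const terser = require("terser")

// One shared cache; Terser mutates it on every call.
const nameCache = {}

// Two stand-in module sources (hypothetical, for illustration only).
const a = terser.minify("export function helper(x) { return x + 1 }", {
  module: true,
  nameCache,
  mangle: true
})
const b = terser.minify("export function other(y) { return y * 2 }", {
  module: true,
  nameCache,
  mangle: true
})

// Names reserved while minifying the first module stay reserved for the
// second one, so the two outputs never collide.
console.log(nameCache)
```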
We need to investigate this; a benchmark would be great.
Probably something can be whipped up quickly with the lodash repository and a bash script or something. I have my hands full atm since there are a few bugs popping up on Terser.
@fabiosantoscode no need to rush, just a todo :+1:
I have a need to generate a terser-loader that minifies before bundling. The use case is that I'd like to design custom rules for minifying/mangling each module instead of applying global mangling rules to the entire bundle. There are also some Webpack transformations that happen during bundling that interfere with contextual mangling rules, which I'd like to sidestep by minifying before bundling. The name cache will of course be important in this process so modules can refer to the correctly mangled names for consistency across modules. However, some mangling rules should be internalized inside modules versus applied globally, so this loader approach would help. I don't believe there's a Terser loader project; is this something that would help address this issue? I will look into it if others agree. Other suggestions are welcome.
@J-Rojas that is inefficient compression and out of scope for this plugin. Also, other plugins (including plugins in webpack itself) can emit new JS assets, and you wouldn't compress those. Using a loader can't solve the problem of memory and CPU usage.
@evilebottnawi you are right that the loader is out of scope. I've begun a new project repo for this effort. So far it has addressed my use case, and I don't agree about the insufficient compression. It seems to be very much on par with minifying as a whole.
> I don't agree about the insufficient compression
Seriously? Webpack provides its own boilerplate code and you can't optimize that code using a loader. Also, as I said above, other plugins can emit JS assets too, so those will remain unuglified. You don't win on memory and CPU load, you only get inefficiently uglified code. We have been developing webpack for a long time and have tried all the approaches.
No need to become defensive. I don't know anything about the memory and CPU load issues, as I do not have a problem with them. I'm addressing my use case, and the empirical evidence from my approach, using a name cache across minified loaded files, shows a similar code size. I'm minifying across at least 150 files. Regardless, I will continue with my approach to address my use case. Thanks for the input.
I'll leave this here for anyone interested: https://github.com/J-Rojas/terser-loader
@evilebottnawi I was able to use terser-loader and terser-webpack-plugin in a two-phase approach to satisfy my requirements (additional property mangling with per-module rules) and also achieve superior compression size. Using this approach with a project that utilizes over 400 modules, I was able to get an additional 22% compression prior to gzip, and 10.5% after gzip. So I'd say skillful use of terser-loader can achieve as good if not better compression. I'll probably do a write-up about this eventually when I have more time.
@J-Rojas you can compress multiple times using multiple plugins, but it is very bad for performance; it can also sometimes create bugs due to bugs on the terser side (but very rarely).
@evilebottnawi thanks for pointing out that Terser bugs are very rare, it's very nice of you <3
Is it possible to mark a part of the code as pre-minimized and let terser skip it / emit it unmodified?
Maybe with a comment like `/*#__COMPRESSED__*/function(){...}`. For me it would be fine to allow it only in front of functions. This would make scope analysis easier, as no variable declaration can leave the function (you could skip parsing these at all).
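A rough sketch of how such skipping could work (purely hypothetical, not Terser code; a real implementation would live in Terser's tokenizer and would also have to track strings, template literals, and comments while matching braces):

```js
// Hypothetical sketch: find an annotated function and return its span
// verbatim so the parser could emit it unmodified instead of parsing it.
const MARK = "/*#__COMPRESSED__*/";

function findCompressedSpan(source) {
  const start = source.indexOf(MARK);
  if (start === -1) return null;
  const open = source.indexOf("{", start);
  if (open === -1) return null;
  let depth = 0;
  for (let i = open; i < source.length; i++) {
    if (source[i] === "{") depth++;
    else if (source[i] === "}" && --depth === 0) {
      // The pre-minified function, copied through without building an AST.
      return source.slice(start + MARK.length, i + 1);
    }
  }
  return null; // unbalanced input
}
```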
@sokra those changes would have to go inside the Terser module, since it would parse the code while looking for these tokens. It should be possible to do this, but it's outside of the scope of this repository. However it would be preferable to control which modules are minified via configuration instead of having to modify the code itself. If you are bundling vendor code together, it would create maintenance problems to have to modify this code if it requires specialized minification. Hence the motivation for the loader approach.
I think I see what @sokra is thinking about. Let me give a concrete example.
Imagine you have these files:
```js
// index.js
import './static';
import './commonjs';
import('./lazy');
export const content = 'index.js content';
console.log(content);

// static.js
export const content = 'static.js content'
console.log(content);

// commonjs.js
const content = 'commonjs.js content'
console.log(content);
module.exports = content;

// lazy.js
export const content = 'lazy.js content'
console.log(content);
```
When these files are bundled with module concatenation turned on, you end up with three modules:

- `index.js` and `static.js` (concatenated together)
- `commonjs.js`
- `lazy.js`

`commonjs.js` and `lazy.js` could not be concatenated because they suffer from the "Non ES6 Module" and "Imported With import()" bailouts described in the docs.

But even though you have 3 modules, you only have two chunks. Only the module containing `lazy.js` will be in a separate chunk; the other chunk contains the other two modules.
The really important things for Terser to minify are these 3 modules. Terser isn't aware of any kind of module loading, so it will always process these as isolated. But because `terser-webpack-plugin` operates at the chunk level (for the reasons @evilebottnawi mentioned), `terser` will have to parse the chunk containing the two modules as if it were a single module. This leads to higher memory and CPU usage than if Terser could process them one at a time.
If you look at the webpack output, it looks like this:
- `0.js`

```js
(window["webpackJsonp"] = window["webpackJsonp"] || []).push([[0],{

/***/ "./src/lazy.js":
/*!*********************!*\
  !*** ./src/lazy.js ***!
  \*********************/
/*! exports provided: content */
/*! ModuleConcatenation bailout: Module is referenced from these modules with unsupported syntax: ./src/index.js (referenced with import()) */
/***/ (function(module, __webpack_exports__, __webpack_require__) {

"use strict";
__webpack_require__.r(__webpack_exports__);
/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "content", function() { return content; });
const content = 'lazy.js content'
console.log(content);

/***/ })

}]);
```

- `main.js`

```js
/******/ (function(modules) { // webpackBootstrap
/******/   // ~200 lines of webpack code here
/******/ })
/************************************************************************/
/******/ ({

/***/ "./src/commonjs.js":
/*!*************************!*\
  !*** ./src/commonjs.js ***!
  \*************************/
/*! no static exports found */
/*! ModuleConcatenation bailout: Module is not an ECMAScript module */
/***/ (function(module, exports) {

const content = 'commonjs.js content'
console.log(content);
module.exports = content;

/***/ }),

/***/ "./src/index.js":
/*!**********************************!*\
  !*** ./src/index.js + 1 modules ***!
  \**********************************/
/*! exports provided: content */
/***/ (function(module, __webpack_exports__, __webpack_require__) {

"use strict";
__webpack_require__.r(__webpack_exports__);

// CONCATENATED MODULE: ./src/static.js
const content = 'static.js content'
console.log(content);

// EXTERNAL MODULE: ./src/commonjs.js
var commonjs = __webpack_require__("./src/commonjs.js");

// CONCATENATED MODULE: ./src/index.js
/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "content", function() { return src_content; });
__webpack_require__.e(/*! import() */ 0).then(__webpack_require__.bind(null, /*! ./lazy */ "./src/lazy.js"));
const src_content = 'index.js content';
console.log(src_content);

/***/ })

/******/ });
```
@sokra mentioned this:
> This would make scope analysis easier as no variable declaration can leave the function (you could skip parsing these at all)
The 3 modules I mentioned before can be minified in isolation because their bodies, within the function closures, do not share anything with the outside. For the `commonjs.js` module, the function I am referring to is this:
```js
/*!*************************!*\
  !*** ./src/commonjs.js ***!
  \*************************/
/*! no static exports found */
/*! ModuleConcatenation bailout: Module is not an ECMAScript module */
/***/ (function(module, exports) {

const content = 'commonjs.js content'
console.log(content);
module.exports = content;

/***/ }),
```
If they were minified separately, it would be useful to leave a hint for Terser indicating that the function should be ignored because it was already minified. This way Terser would ignore the pre-minified modules and only minify the webpack module loading logic around the modules proper:
```js
/***/ /*#__COMPRESSED__*/ (function(module, exports) {

const content = 'commonjs.js content'
console.log(content);
module.exports = content;

/***/ }),
```
This approach would still require two Terser passes: one that processed modules after concatenation, and one that processed all chunks at the end including any extra js assets. The difference is the first pass would process much smaller pieces of code, which leads to a better load distribution between workers and less parse-related resource consumption. Then the second pass would be much faster because Terser would ignore all modules that were already minified.
There is a potential problem here with tree shaking: we can lose some `__COMPRESSED__` comments.
And we still have high memory usage, because the big file is still in memory. I think a better solution here is looking at how we can optimize memory/CPU usage on the terser side.
The big file might still be in memory, but what consumes resources is not the size of the file itself but rather the result of parsing the file. With the `/*#__COMPRESSED__*/` comments that @sokra proposed, Terser would not parse those sections of the file, and thus not consume resources doing so. I have seen `terser-webpack-plugin` workers take 800MB and more for very large Webpack chunks (I think maybe 20MB of source code), so we are not talking about small amounts of memory here.

It's true that Terser could optimize resource usage. But Terser already has a way to do this: feeding Terser the isolated modules guarantees it will use the least amount of resources possible. In this particular case `terser-webpack-plugin` provides Terser with a large artificial module composed of many isolated modules, without giving Terser any hint about the isolation boundaries. However optimized Terser might be, it would never perform optimally under these circumstances.
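A small comparison of the two modes (a hypothetical sketch with stand-in sources, using the synchronous `terser.minify` API from earlier in the thread):

```js
const terser = require("terser")

// Two tiny stand-in modules (hypothetical sources, for illustration only).
const moduleSources = [
  "function first(a, b) { return a + b } console.log(first(1, 2))",
  "function second(n) { return n * n } console.log(second(3))"
]

// Chunk-level: one large parse, so the AST for the whole chunk is alive
// in memory at once.
const chunkResult = terser.minify(moduleSources.join("\n"))

// Module-level: one small parse at a time, so peak memory is bounded by
// the largest single module rather than by the whole chunk.
const moduleResults = moduleSources.map(src => terser.minify(src))
```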
> I have seen terser-webpack-plugin workers take 800MB and more for very large Webpack chunks (I think maybe 20MB of source code), so we are not talking about small amounts of memory here.
Maybe you can provide an example? Usually memory consumption only increases when source maps are enabled.
`/*#__COMPRESSED__*/` is not safe, because we really can lose some comments during the different optimization stages.
https://github.com/vmware/clarity is a project I benchmarked in the past and stored the results in https://github.com/filipesilva/angular-cli-perf-benchmark.
```
[benchmark] Benchmarking process over 5 iterations, with up to 5 retries.
[benchmark] ng build website --prod (at /home/circleci/project/project)
[benchmark] Process Stats
[benchmark] Elapsed Time: 122160.00 ms (120890.00, 113270.00, 161590.00, 112800.00, 102250.00)
[benchmark] Average Process usage: 1.53 process(es) (3.67, 1.01, 1.00, 1.00, 1.00)
[benchmark] Peak Process usage: 8.40 process(es) (36.00, 3.00, 1.00, 1.00, 1.00)
[benchmark] Average CPU usage: 156.52 % (221.59, 144.47, 132.13, 140.85, 143.58)
[benchmark] Peak CPU usage: 1218.22 % (3700.00, 780.00, 544.44, 533.33, 533.33)
[benchmark] Average Memory usage: 1198.62 MB (1405.18, 1134.82, 1208.82, 1088.15, 1156.14)
[benchmark] Peak Memory usage: 2335.99 MB (4962.54, 1781.51, 1672.10, 1604.10, 1659.72)
```
The important part is this:
```
[benchmark] Peak Memory usage: 2335.99 MB (4962.54, 1781.51, 1672.10, 1604.10, 1659.72)
```
The first time the build ran, it took 4962 MB of RAM. Subsequent times it only took around 1700 MB because it was using the `terser-webpack-plugin` cache, so terser didn't run again.
You can reproduce these results by cloning the repo, adding circleci to it, and uncommenting the `clarity-node_10` job so it runs. You can also run it manually by following the circleci config commands.
I understand that the `/*#__COMPRESSED__*/` approach might have kinks. It was only just mentioned and there hasn't been a lot of design gone into it. But it seems to be a worthwhile approach.

I hope we can agree that the performance of `terser-webpack-plugin` limits how large chunks can get before you hit the memory limit of the machine. From those benchmarks it seems like minification can currently consume double the memory of everything else in the build. This makes it the largest contributor, and the one where it is most worthwhile to pursue optimizations.

At some point a user might need to turn off parallelization to reduce memory because their CI machine doesn't have enough. Then they have to turn off source maps too. Lastly they have to artificially split chunks. None of these are things a user wants to do for their app; they're things they have to do because otherwise the build will fail.
Thanks for the repo. I still think we can optimize terser; potentially terser could split the source file into parts to reduce memory usage (it already parses the code, so it should not be hard). In fact, this should be done on the terser side. You will still get the same error when using terser without webpack. Anyway, we can experiment with different approaches, and any feedback/PRs are welcome.
@filipesilva the multiple-pass approach is what I'm using in my project with 400+ module files. I'm using `terser-loader` to process module files with a set of custom rules for each file. Then I use `terser-webpack-plugin` to optimize the final result. The 'compressed' token could be added to Terser, but you could also re-organize your code into module files that will be compressed or uncompressed.

This does not address memory consumption, since the final output file is still processed, and the larger the file, the more memory is consumed. That is a Terser-specific issue and would likely need some significant changes to solve.
Any help on finding out where Terser is using too much RAM is appreciated. I haven't had the time to learn how people optimize RAM usage these days, or to get Terser through even the simplest testing/inspector workflows.
In the meantime I'm switching some stuff to use bitfields for CPU reasons, I guess this will save a little bit of RAM as well.
I don't see Terser going multiprocess unless it somehow gains bundler abilities, which is not off the table. However I really feel that optimising modules one by one could be really beneficial, and parallelism would be better too. Computers with 4 cores and just 2 chunks are wasting 50% of potential CPU time. Even if the RAM story in Terser goes perfectly, that's still a lot of wasted CPU!
WRT already-compressed code I think something like the annotation @filipesilva mentioned would be pretty cool. Terser (and UglifyJS as well) historically sucks at re-compressing compressed files. Oh, the mysteries life has for us. I have zero idea of why that might be happening.
@fabiosantoscode I don't think we have problems with CPU usage, only memory. I think it can be easy to debug: just create a big file (or get one from the reproducible test repo above) and run terser using its own CLI with `--inspect`, then profile memory. (For example, terser returns the AST in `options` when you use `minify`, but this AST isn't needed, so we could reduce memory usage by removing it.)

Example of code (we end up with the terser AST in `options`):
```js
var code = {
  "file1.js": "function add(first, second) { return first + second; }",
  "file2.js": "console.log(add(1 + 2, 3 + 4));"
};
var options = { toplevel: true };
var result = Terser.minify(code, options);
console.log(result.code);
// !Look here!
console.log(options);
```
I think there are a lot of small optimizations possible on the terser side, and they could potentially reduce memory consumption.
@J-Rojas in your setup I believe you're still running Terser over the modules twice, which is what I'd like to avoid in order to reduce the resource usage.
@fabiosantoscode in the webpack world, `terser-webpack-plugin` already provides very good parallelism with a cache. Sometimes we have users pointing out that builds take a long time, but that's secondary to builds failing due to hitting memory limits.
@evilebottnawi I'll get a sample of a medium sized bundle and a large bundle, profile CPU and memory usage, and open an issue at the terser repository. Maybe there's some low-effort optimizations that can be done that yield significant benefits.
@filipesilva thanks. Maybe we can use typed arrays, Map, and (Weak)Set in terser, which could potentially decrease memory usage.
We have already pretty much optimized everything on our side; anyway, if somebody has ideas, PRs/feedback are welcome.
Terser performance tracking issue (https://github.com/terser/terser/issues/478), including a benchmark repo (https://github.com/filipesilva/terser-performance).
@filipesilva cool that you did that. One has to use a non-minimized terser version to see anything. I looked at the profile, but didn't see anything obvious.
- `TreeWalker.push/pop/has_directive` could be optimized. A prototype chain has O(n) lookup cost. Also, it seems like directives are already tracked during parsing, so my guess is that this doesn't really have to be tracked again during walking and could instead be attached in the parser.
- `DEFMETHOD`: assign to the prototype directly. This has no performance benefit, but would lead to readable function names in the profile. Same for `def_eval`, `def_negate`, etc. It would make the source code more readable too.
- `MAP.at_top/last` seem to be unused, so `MAP` can be simplified.
- `MAP` never seems to be used with `backwards = true`, so `splice` can also be removed.
- `MAP` can be replaced with `Array.prototype.map`, if one can figure out where `do_list` is called with an Object instead of an Array. Is it called with an Object at all?

The reason for MAP and MAP.splice is that some transformations can return multiple statements or expressions. Otherwise array.map() would be great.
Regarding `DEFMETHOD`, I do agree with you.

For the other points I'll have a look at each and see what can be done.

However this still doesn't fix the memory usage, just CPU usage. An interesting exercise would be to call Babel or acorn on the unminified chunk and see how much memory they use and where, because Terser's memory allocations are concentrated in the parsing phase (creating a ton of AST nodes).
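A quick way to try that exercise (a sketch under assumptions: `acorn` is installed and `chunk.js` is an unminified chunk on disk; heap deltas are noisy because GC can run at any time):

```js
const fs = require("fs")
const acorn = require("acorn")

const source = fs.readFileSync("chunk.js", "utf8")

const before = process.memoryUsage().heapUsed
const ast = acorn.parse(source, { ecmaVersion: 2019 })
const after = process.memoryUsage().heapUsed

// Keep `ast` reachable so the measurement reflects retained AST memory.
console.log(ast.body.length, "top-level nodes")
console.log(`parse retained roughly ${((after - before) / 1048576).toFixed(1)} MB`)
```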
> The reason for MAP and MAP.splice is that some transformations can return multiple statements or expressions. Otherwise array.map() would be great.
Oh yes, you are right, I missed the `push.apply` here. Maybe `Splice` can be replaced by returning a plain array instead of the `Splice` class indirection.
True @sokra. Probably `Array.isArray()` can help here.
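Something like this (a hypothetical sketch of the idea, not Terser's actual code): a transform may return either a single node or an array of nodes, and `Array.isArray()` distinguishes the two without a `Splice` wrapper class:

```js
// Map over AST nodes, flattening transforms that expand one node into many.
function mapNodes(nodes, transform) {
  const result = [];
  for (const node of nodes) {
    const replacement = transform(node);
    if (Array.isArray(replacement)) {
      result.push(...replacement); // one statement became several
    } else {
      result.push(replacement);
    }
  }
  return result;
}
```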
@filipesilva today I will release a new version of the terser plugin; it reduces memory usage by around 80-90% for big projects, and small projects also see memory improvements (we now don't create unnecessary workers when there are fewer files than CPU cores, and add concurrency when there are more files than workers).
Maybe you can run the benchmarks from https://github.com/webpack-contrib/terser-webpack-plugin/issues/104#issuecomment-536615187 again and provide the information here?
@evilebottnawi awesome, thanks for letting me know! Once a release is out I can re-run the benchmarks.
@evilebottnawi tried the same project as before, but had to use more recent user code and dependencies, and I also ran it on my machine instead of on CI. The numbers in this comment shouldn't be compared with my earlier comment.
This project produces around 50 chunks: around five big ones (~1MB each), while the rest are small (~50KB).
With `terser-webpack-plugin@2.3.2` I saw these numbers:
```
[benchmark] Benchmarking process over 3 iterations, with up to 5 retries.
[benchmark] node --max_old_space_size=2400 ./node_modules/@angular/cli/bin/ng build website --prod --progress=false (at /home/filipesilva/sandbox/clarity)
[benchmark] Process Stats
[benchmark] Elapsed Time: 175596.67 ms (196900.00, 151270.00, 178620.00)
[benchmark] Average Process usage: 1.62 process(es) (2.73, 1.06, 1.05)
[benchmark] Peak Process usage: 14.33 process(es) (14.00, 15.00, 14.00)
[benchmark] Average CPU usage: 166.16 % (226.72, 138.00, 133.77)
[benchmark] Peak CPU usage: 920.23 % (1488.89, 638.46, 633.33)
[benchmark] Average Memory usage: 1343.90 MB (1661.48, 1210.07, 1160.16)
[benchmark] Peak Memory usage: 2452.93 MB (3460.22, 1945.65, 1952.91)
```
With `terser-webpack-plugin@2.3.3` I saw these numbers:
```
[benchmark] Benchmarking process over 3 iterations, with up to 5 retries.
[benchmark] node --max_old_space_size=2400 ./node_modules/@angular/cli/bin/ng build website --prod --progress=false (at /home/filipesilva/sandbox/clarity)
[benchmark] Process Stats
[benchmark] Elapsed Time: 173516.67 ms (191080.00, 163450.00, 166020.00)
[benchmark] Average Process usage: 1.52 process(es) (2.50, 1.02, 1.04)
[benchmark] Peak Process usage: 8.00 process(es) (8.00, 8.00, 8.00)
[benchmark] Average CPU usage: 160.40 % (211.31, 135.00, 134.88)
[benchmark] Peak CPU usage: 705.68 % (1054.55, 562.50, 500.00)
[benchmark] Average Memory usage: 1350.10 MB (1625.15, 1221.94, 1203.20)
[benchmark] Peak Memory usage: 2466.93 MB (3559.46, 1880.94, 1960.40)
```
The first number in each parenthesis is the important one, since it's the resource usage for the first build. The second and third builds use the `terser-webpack-plugin` cache, so they don't end up doing real work.

So for both average and peak memory usage I don't see an improvement with `2.3.3`. Both values are roughly within variation.
Total number of processes used went down, but build time doesn't seem to have really been affected much.
For this specific project it looks like https://github.com/webpack-contrib/terser-webpack-plugin/pull/211 didn't make much of a difference locally. I imagine it helped on CI with the situation described by @cjlarose in https://github.com/webpack-contrib/terser-webpack-plugin/issues/143#issuecomment-573954013, but mostly because the fork problem was fixed.
If you want to try to use the same benchmarking tool for other cases, you can install it globally with `npm i -g https://github.com/filipesilva/angular-devkit-benchmark`. That repository contains instructions on how to use it. I put it there because we don't have an official release for it.
Thanks for the information. I will investigate that in the near future; maybe we should improve memory consumption not only in the terser plugin :smile:
@filipesilva The measurements that you're reporting are consistent with what I'd expect for upgrading from 2.3.2 to 2.3.3. The memory improvements made in 2.3.3 (specifically #211) reduce the total required maximum heap size because it takes a portion of the code that would allocate and retain large amounts of memory and instead makes it so that we avoid new allocations until necessary (when a worker becomes available) and release references as we make progress.
But what you're measuring is average and peak memory usage (probably RSS) when using a `max_old_space_size` of 2400MB. V8 will consume all of the `max_old_space_size` you give it until there's memory pressure, at which point it'll perform garbage collection. This is desirable because garbage collection isn't free and V8 might as well use all of the memory you give it. So if, during the lifetime of your program, at least 2400MB are allocated, peak memory usage will be around 2400MB.
In 2.3.2, it was possible for `terser-webpack-plugin` to create so many large objects (and keep references to them) that when V8 reached the max amount of memory afforded to it (around `max_old_space_size`), it would try to perform garbage collection, but couldn't free up enough memory to stay within `max_old_space_size`. That's what causes `Javascript heap out of memory` errors.
In 2.3.3, `terser-webpack-plugin` releases references to those large objects as it goes on and processes other assets. That means when V8 reaches `max_old_space_size`, it's actually able to identify objects that are unreachable (and therefore candidates for removal), freeing up memory.
So upgrading to 2.3.3 while keeping your `max_old_space_size` the same won't really have any benefit in terms of average or peak memory usage. The value of upgrading to 2.3.3 is that it makes it possible to use a lower `max_old_space_size` (the default of 1400MB on a 64-bit machine should be fine for most projects so long as parallelism isn't too high). So if you had to increase `max_old_space_size` in your project before 2.3.3, you can probably lower it now and use fewer resources. For some concrete benchmarks, I collected some in https://github.com/webpack-contrib/terser-webpack-plugin/pull/206. Although those benchmarks weren't run against the exact code that's in 2.3.3, I can confirm that the results are comparable.
Feature Proposal
`terser-webpack-plugin` operates on webpack chunks directly via the `optimizeChunkAssets` compilation hook. At this point individual chunks exist, each containing a collection of modules wrapped in the Webpack module loader. A single chunk can contain many modules.

Terser does not understand the indirection provided by the Webpack module loader and will end up optimizing each module individually. Providing a whole chunk to terser will yield the same optimizations as providing the individual modules contained in that chunk.
It's still important to optimize modules as late as possible because Webpack will concatenate modules. In fact, this concatenation is what enables most of the savings with Terser, since that allows Terser to analyse more code in a single module.
So a better place to execute terser would be one of the hooks below, optimizing the individual modules:
- `optimizeModulesAdvanced`
- `afterOptimizeModules`
- `optimizeChunkModulesAdvanced`
- `afterOptimizeChunkModules`

I don't know which one is better, but all of them seem to provide modules and run around the same time as `optimizeChunkAssets` (see the sketch after this list).
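A minimal sketch of what tapping one of these hooks could look like (hypothetical plugin, webpack 4 hook signatures; the actual per-module terser call is elided):

```js
// Sketch: tap a module-level hook so each module is visible in isolation,
// instead of minifying whole chunk assets in optimizeChunkAssets.
class ModuleLevelMinifyPlugin {
  apply(compiler) {
    compiler.hooks.compilation.tap("ModuleLevelMinifyPlugin", compilation => {
      compilation.hooks.afterOptimizeChunkModules.tap(
        "ModuleLevelMinifyPlugin",
        (chunks, modules) => {
          for (const module of modules) {
            // A per-module terser call could run here (elided).
          }
        }
      );
    });
  }
}

module.exports = ModuleLevelMinifyPlugin;
```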
Feature Use Case
On large builds, individual chunks might be very large and require a lot of memory and CPU to process. In https://github.com/angular/angular-cli/issues/13734#issuecomment-500849058 I benchmarked the peak memory usage of several projects and saw that parallel terser processing can contribute greatly to it.
A concrete example is a project that used around 1GB of memory most of the time, but when it spawned processes for terser it had to process several small chunks plus one or two large chunks. The small chunks used between 15 and 80MB of memory, but the large chunks used up to 400MB and took much longer to process. By processing a large quantity of smaller modules, worker processes can use fewer host machine resources on average and spread the load more evenly.