MichaReiser / parallel.es

Parallelize your JavaScript Applications with Ease
https://michareiser.github.io/parallel.es/
MIT License

Going further with Webpack #108

Open jefffriesen opened 7 years ago

jefffriesen commented 7 years ago

I spent quite a bit of time rewriting my Node.js scripts to take advantage of the parallelization of this library. There were lots of loops that I could try to optimize, but for simplicity I focused on the most expensive one. While I was rewriting I kept reminding myself that this is what your Webpack plugin does and appreciating the brilliance of Webpack itself (what would have been brilliant was me just trying to make your Webpack plugin work with Node.js). Sometime during that rewrite I found this 2012 article by Paul Graham that was re-posted to Hacker News:

http://www.paulgraham.com/ambitious.html

Frighteningly Ambitious Startup Ideas 6) Bring Back Moore's Law

The last 10 years have reminded us what Moore's Law actually says. Till about 2002 you could safely misinterpret it as promising that clock speeds would double every 18 months. Actually what it says is that circuit densities will double every 18 months. It used to seem pedantic to point that out. Not any more. Intel can no longer give us faster CPUs, just more of them.

This Moore's Law is not as good as the old one. Moore's Law used to mean that if your software was slow, all you had to do was wait, and the inexorable progress of hardware would solve your problems. Now if your software is slow you have to rewrite it to do more things in parallel, which is a lot more work than waiting.

It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. There are several ways to approach this problem. The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Is there no configuration of the bits in memory of a present day computer that is this compiler? If you really think so, you should try to prove it, because that would be an interesting result. And if it's not impossible but simply very hard, it might be worth trying to write it. The expected value would be high even if the chance of succeeding was low.

It got me thinking about why I had to pick only one loop to optimize. Why isn't the computer (or Webpack) doing this for me? Related to this is choosing the optimal number of threads:

https://github.com/MichaReiser/parallel.es/issues/102#issuecomment-301240489

Therefore, the idea is to use a good default that you can override if it is unsuitable.

One could override it using trial and error based on heuristics (like you linked to). This could, in theory, be done by the computer. Later, you even said what I've been thinking in the back of my mind:

In general, parallelizing might help. However, I would first start by profiling your application to identify any bottlenecks.

What I'm getting at is this:

Let's say we can profile the code. We can track how long each function takes, whether it's parallelizable (like mapping over collections), and whether those functions are mostly IO- or CPU-bound.

Then, based on the CPU and memory available, a Webpack plugin could go into the code and rewrite functions to use parallel-es. It would ignore simple, fast functions because the up-front cost of parallelizing is greater than running them single-threaded. It might optimize based on heuristics alone, or it might run multiple times to find the best solution (like the automated machine learning resource I posted before), or a combination of both.
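
To make that concrete, here is the kind of before/after rewrite I'm imagining, using the chaining API from the parallel.es README (I may have the details slightly off):

```js
// Before: a CPU-bound map over a large collection, run on a single thread.
const results = items.map(item => expensiveTransform(item));

// After: the plugin rewrites it to use parallel-es, which splits the work
// across workers and hands back the combined result asynchronously.
import parallel from "parallel-es";

parallel.from(items)
  .map(item => expensiveTransform(item))
  .then(results => consume(results));
```

One wrinkle is that the rewritten version is asynchronous, so the plugin would also have to adjust whatever consumes the result.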

The Webpack plugin would likely need to output a config file. If that optimization config is found on the next run, it uses it; otherwise it does the more expensive optimization step again.
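
Something along these lines (the file name and shape are entirely made up) would probably be enough to start with:

```js
// parallel.optimization.js (hypothetical): emitted by the plugin after a profiling
// run, and read back on later builds to skip the expensive optimization step.
module.exports = {
  hardware: { cores: 8 },
  callSites: [
    { file: "src/transform.js", loop: "items.map(expensiveTransform)", parallelize: true, workers: 6 },
    { file: "src/format.js", loop: "rows.map(formatRow)", parallelize: false }
  ]
};
```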

Optimizing based on hardware has a lot of advantages in a Node.js environment where the hardware is known. But there may still be a lot to gain in a browser-based environment. Maybe in most cases we can count on having 2-4 cores with graceful fallbacks. It's something to explore.
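
For what it's worth, both environments already expose a core count that could serve as the default, with a conservative fallback when it's missing (nothing parallel-es-specific here):

```js
// Node.js: the number of logical CPUs is available via the os module.
const os = require("os");
const coresOnServer = os.cpus().length;

// Browser: navigator.hardwareConcurrency is widely available but not guaranteed,
// so fall back to a conservative guess when it is missing.
const coresInBrowser =
  (typeof navigator !== "undefined" && navigator.hardwareConcurrency) || 2;
```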

Profiling: There are quite a few profiling solutions on npm, but maybe the best tool in our toolbox is the Chrome Inspector. The detail of the data in that tool is incredible, and it's exportable (at least manually). Maybe it could be run and captured programmatically.
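
On the Node side at least, a V8 CPU profile (the same kind of data the DevTools profiler shows) can be captured programmatically with the built-in inspector module. A minimal sketch, where runWorkload is a placeholder for the code being measured; the browser side would presumably go through the Chrome DevTools Protocol instead:

```js
const inspector = require("inspector");
const fs = require("fs");

const session = new inspector.Session();
session.connect();

session.post("Profiler.enable", () => {
  session.post("Profiler.start", () => {
    runWorkload(); // placeholder: the code we want to measure

    session.post("Profiler.stop", (err, { profile }) => {
      // Write the profile to disk so it can be analyzed or loaded into DevTools.
      fs.writeFileSync("app.cpuprofile", JSON.stringify(profile));
      session.disconnect();
    });
  });
});
```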

I realize this is a pretty ambitious project. It's ambitious like in Paul Graham's essay. But Prepack is an ambitious project that has significant implications for web and Node.js performance. Speedy.js also seems like an ambitious project. This seems, if it can work, worth it.

I'd be interested in your thoughts on this approach. Is it a good idea, or even possible? (Independent of whether you have the time or interest to take it on.)

Thanks.

MichaReiser commented 7 years ago

I like your enthusiasm, and you come up with many great ideas!

First, I want to explain that the library intentionally splits the static transpilation into a Webpack and a Babel plugin. The former is used to create a single bundle for a web environment. However, the primary work is done in the Babel plugin, which makes it easy to write a plugin for other bundlers. For Node, I don't think we need a Webpack plugin since the situation is entirely different. On the web, it is preferable to load as many scripts upfront as possible to reduce the number of requests (code splitting is something for the future). In Node.js, on the contrary, requiring new scripts is quite common. So I believe that for Node.js, a simple per-file transpilation step implemented in Babel, which generates a new file containing the code to execute in the background, is sufficient.

Next, some comments on your suggestion. I'm not going to say that it is impossible. I think it is possible to implement to some degree. But static program analysis is quite limited and is often a balance between being conservative (favoring correctness) and reducing the number of false negatives. Furthermore, statically analyzing JavaScript tends to be harder than analyzing statically typed languages (or, especially, functional languages). But with time at hand, a workable and usable implementation should be possible (but I would start with supporting Node first ;))

Then, based on the CPU and memory available, a Webpack plugin could go into the code and rewrite functions to use parallel-es. It would ignore simple, fast functions because the up-front cost of parallelizing is greater than running them single-threaded. It might optimize based on heuristics alone, or it might run multiple times to find the best solution (like the automated machine learning resource I posted before), or a combination of both.

Rewriting loops is possible. There are also approaches to estimate the cost of a function without having to run it; however, executing a function should result in more precise figures. But then the program needs a way to determine the data over which a loop iterates. So probably the simplest approach is to instrument the whole application (e.g. using Babel's istanbul plugin, extended to record runtimes, or perhaps the profiling API of browsers?) and let the user run the application. After the run, collect the results and identify the most costly loops. Now comes the next difficulty: the static analysis needs to guarantee that a rewrite neither changes the semantics nor introduces any data races or race conditions (e.g. if the same array index is accessed from different threads). It can also be more subtle: what if the race condition is caused by an overridden method in a subtype? The subtype might not be available at compile time, or the test run might not contain any object instance of the subtype.
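
To illustrate the kind of case the analysis has to reject, compare a loop whose iterations are independent with one whose iterations write to shared state (a made-up example):

```js
// Safe to parallelize: iteration i only reads input[i] and writes output[i].
for (let i = 0; i < input.length; i++) {
  output[i] = transform(input[i]);
}

// Not safe to rewrite blindly: every iteration updates the shared `histogram`,
// so splitting the loop across workers would either race (with shared memory)
// or silently drop updates (with copied memory).
for (let i = 0; i < input.length; i++) {
  histogram[bucketOf(input[i])] += 1;
}
```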

It might be easier to first focus on automatically rewriting lodash (or underscore) based loops before creating a program that can rewrite arbitrary loops.
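
For example, the first call below is a good candidate because the callback only depends on its argument, while the second looks syntactically similar but closes over mutable outer state, so it has to be left alone (made-up snippets):

```js
// Candidate: element-wise, the callback only reads its argument.
const areas = _.map(shapes, shape => shape.width * shape.height);

// Not a candidate: the callback mutates a variable declared outside the loop,
// so running chunks in parallel would change the result.
let runningTotal = 0;
const totals = _.map(shapes, shape => (runningTotal += shape.width * shape.height));
```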

Optimizing based on hardware has a lot of advantages in a Node.js environment where the hardware is known. But there may still be a lot to gain in a browser-based environment. Maybe in most cases we can count on having 2-4 cores with graceful fallbacks. It's something to explore.

To be honest, I have never worked on a project where the hardware was the limiting factor, most certainly because the majority of applications I have written are "boring" web applications. So it seems like you are more experienced in this field.

However, since static program analysis is limited in precision (or you have to wait forever), a runtime approach might be easier to achieve. Furthermore, I also believe that it might be non-trivial to use such a tool and to maintain the saved configuration (what if a programmer makes changes to the program and, therefore, the line numbers no longer match?). If you take a look at Java or C#, both offer an API similar to parallel.es but use a smart scheduling strategy to determine the optimal number of threads at runtime. So I think it would be interesting to see how much the performance can be improved by a "smarter" scheduling strategy, potentially with a work-stealing approach.
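
To illustrate the difference: a static schedule decides the split upfront, while a dynamic one lets idle workers keep pulling small chunks from a shared queue, so uneven work balances itself. A conceptual sketch (splitInto is a made-up helper, and this is independent of how parallel.es schedules today):

```js
// Static split: one big chunk per worker, decided upfront.
// If one chunk happens to be slow, the other workers sit idle at the end.
const staticChunks = splitInto(items, workerCount);

// Dynamic scheduling: many small chunks in a shared queue; whichever worker
// becomes idle pulls the next one.
const queue = splitInto(items, Math.ceil(items.length / 1000));
async function workerLoop(runChunkOnWorker) {
  while (queue.length > 0) {
    const chunk = queue.shift();   // in a real implementation this handoff
    await runChunkOnWorker(chunk); // would go through messages to a worker
  }
}
```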

I would suggest splitting the problem into smaller problems and addressing each separately. This allows making use of the improvements even if the end goal is never reached (because of time or technical difficulties). Furthermore, I suggest first finding use cases that show the benefits of each of the described approaches. For example, how much does performance improve if the optimal number of threads is determined upfront compared to an algorithm run at runtime? For this analysis, the optimum can be determined by hand. Or, what are examples of code fragments that the Babel plugin should recognize and rewrite?
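
For the "determine the optimum by hand" part, a throwaway micro-benchmark along these lines could be enough (a rough sketch: maxConcurrencyLevel stands in for whatever option actually controls the number of workers, and the chain is assumed to be awaitable like in the README examples):

```js
import os from "os";
import parallel from "parallel-es";

// Time the same job with 1..N workers and report the fastest configuration.
async function findBestWorkerCount(items, job) {
  const timings = [];
  for (let workers = 1; workers <= os.cpus().length; workers++) {
    const start = Date.now();
    // maxConcurrencyLevel is a placeholder for the real option name.
    await parallel.from(items, { maxConcurrencyLevel: workers }).map(job);
    timings.push({ workers, ms: Date.now() - start });
  }
  return timings.sort((a, b) => a.ms - b.ms)[0];
}
```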

So, I believe your ideas are fascinating and worth further exploration. But since time is always a crucial factor, it might be necessary to start small (at least on my side).

Cheers, Micha