nodejs / roadmap

This repository and working group has been retired.

Should Node.js be VM neutral in the future? #54

Closed mikeal closed 2 years ago

mikeal commented 8 years ago

First and foremost, a bit of a reality check: The "Node.js Platform" is already available on a variety of VMs other than the V8 runtime we ship with:

Because of Node.js' massive ecosystem of packages, educational materials, and mind-share we should expect that in the future this will continue. Part of Node.js going everywhere is that it may need to be on other VMs in environments V8 can't go. There's not much that we can do to prevent this.

So the question becomes: Should Node.js Core move towards being VM neutral and supporting more VMs in the main project?

Some of the advantages would be:

• Normalization of the Native API (which currently breaks every major release)
• De-facto standard for any JS VM to add Node.js support.
• Centralization of test infrastructure would increase stability of non-V8 platforms.
• Standardization of API could allow us to adopt new V8 versions outside of breaking major releases.

There's a long discussion about how to do this. Without guarantees from all the target VM vendors that they will support this neutral API it could fall on us to make that work. Historically V8 has made drastic API changes that were not supportable through the API nan had implemented.

There's also an open question about how to structure the tree and the build, and whether we should continue sticking V8 and every supported VM in a vendor directory or pull them in at build time.

Anyway, let's have the discussion :)

@nodejs/ctc

formula1 commented 8 years ago

@ChiperSoft the current problem is node-gyp. I've looked at its source and honestly it could probably be easily rewritten in JavaScript (it's just tedious and a pain in the ass).

But back to your question, that is a really good point. Ideally any browser can be installed and node would use one of the browsers javascript engines as a library. But that is probably far more complicated than it sounds.

Twipped commented 8 years ago

Ideally any browser can be installed and node would use one of the browsers javascript engines as a library. But that is probably far more complicated than it sounds.

That's not very feasible for servers.

formula1 commented 8 years ago

You're entirely correct, though at that point you're likely doing a couple of things:

pacman -S git
pacman -S chakra
pacman -S nodejs
pacman -S npm

nodejs engine chakra

For server development, I would argue it's easier, since (usually) you will know exactly what you need.

Edit: Basically what I'm saying is: if it's a newbie, then we should expect a browser is installed and maybe it's possible to use a library available from it. If it's a server, we should expect the admin doesn't need to be hand-held. At some point we all needed (or need) to leave the kiddie pool. I'm in that boat with giant C++ projects and WebGL (though I'm avoiding it for now).

indexzero commented 8 years ago

+1 :+1:

As a long-time observer and sometimes contributor to core itself, I've thought about this a lot over the years. This is not the first time this discussion has come up. What @rdodev said today struck a chord for me and pushed my thoughts on this to "yes, node should go this direction".

I think before node jumps on "let's do it because it's good", let's ask whether we're accidentally overcommitting and using time that could be better spent improving other aspects of node.

The reason is that the push for an interop layer in node will solidify the foundation for one of the biggest (if not the biggest) asks I've heard time and time again from node folks, from hobbyist to enterprise: more tooling and introspection that is truly reliable over time.

Whatever the approach and API(s) decided on for this they will be more documented, better maintained, and looked at by more VM developers than they are now. This will as a point of process make them more stable, reliable and accessible for the module ecosystem to build the tooling node needs right now.

bjrmatos commented 8 years ago

While the idea of a node with a VM-neutral implementation sounds great, it could create a big mess/fragmentation in user land.

Imagine an author creates module A, tested on node 6.0; the author is using node with Chakra and taking advantage of new ES features only available in Chakra. Finally module A gets published to npm, and a user on node 6.0 with V8 tries to use module A: Syntax Error! That kind of experience will feel very annoying. Of course the author can use feature detection to support all VMs/engines, but that is something that doesn't feel right in server-side code; module authors will have a lot of work to do to make sure modules support all VMs/engines. Eventually a caniuse-like page will exist for node and it will feel like a joke :/
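The feature-detection burden described above might be sketched along these lines (an illustrative, hedged example only; the snippets tested and the fallback choice are invented, not a recommended pattern):

```javascript
// Illustrative sketch of the feature detection a module author would be
// pushed into if one node version could ship on engines with different
// levels of ES support. The feature snippets and fallback are invented.
function supportsSyntax(snippet) {
  try {
    new Function(snippet); // parses the snippet without running it
    return true;
  } catch (e) {
    return false; // SyntaxError: this engine can't parse the feature
  }
}

const hasArrowFunctions = supportsSyntax('(x) => x');

// Pick an implementation based on what the current engine parses.
const identity = hasArrowFunctions
  ? new Function('x', 'return ((y) => y)(x)')
  : function (x) { return x; };

console.log(identity(42)); // prints 42 on either path
```

This works, but it underlines the complaint: on the server, where the runtime is normally known exactly, having to probe the engine at all feels wrong.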

Qard commented 8 years ago

There are already significant differences in the level of ES6 support just between recent versions of node, due to V8 upgrades, and that hasn't been causing any major problems.

obastemur commented 8 years ago

Full disclosure: I'm one of the maintainers of JXcore.

If somebody had asked the question "should node.js be VM neutral in the future?" two years ago, I would have said, "Can we implement our own JavaScript engine instead?"

What has been discussed here is a double-edged sword.

PART 1 - Reality of the day

Microsoft's own shimming for V8 is a proven piece of work to run node.js with multiple engines.

JXcore is a fork of node.js and can be considered as VM neutral in many ways. See downloads page with ChakraCore, SpiderMonkey, and V8 options.

Are JXcore or node-chakra the perfect solution to the problem? No, they are not, and there is no perfect solution to this problem (see Part 2). On the JXcore side, our intention wasn't even to bring SpiderMonkey, or multiple engines, to the node ecosystem. SM was the best option available for iOS and we didn't have the resources to continue with dedicated JavaScript engine work.

Long story short: all the dirty work has been done, and a node.js app can already run on top of most of the JavaScript engines available today anyway.

PART 2

Putting other JS engines aside, the real problem here is the native API breakage, right? Unfortunately, it is going to break no matter what. Let's say the V8 team decided to keep the ABI/API intact. How about the evolution of the underlying JavaScript engine itself (bug fixes, spec changes, new features..)? Remember, that API is just a straight bridge to that already-changed engine.

For this reason, the idea of keeping V8 header files intact as much as we can sounds more vulnerable to instabilities than managing our own GC-independent shimming on top of it.

We have already been discussing the API breakage within the nodejs/api working group, and AFAIK most agree with the advantages of a V8-independent API for addons.

Wondering if any of the options given below sound reasonable?

My 2 cents

Node.js needs its own dedicated Javascript engine.

bjrmatos commented 8 years ago

@Qard That is because it is predictable looking at the version number: everyone knows that node v4 has more ES features than v0.10/v0.12. But a VM-neutral node will become unpredictable; the same version number can be incompatible between engines.

kobalicek commented 8 years ago

Here are my opinions:

  1. V8 shim - Bad idea! If you guys decide to keep the current V8 API and make it the new base, then after some time you will have the same problems that happened in the past. As an example, take the 2-year-old V8 API and try to create a V8 shim that will work with the latest V8. It would be a lot of work and I'm not sure it's even possible. The V8 shim will always have symbol collisions with V8; I think it's really a bad idea.
  2. NAN - It was a workaround which I have never liked, personally. I understand the reasons for creating it, but it didn't work long-term either. I consider NAN dead at this point; no matter how much time is invested into it, it's gonna break again.
  3. Neutral API in C++ (not in C like somebody proposed) - For me this sounds like the best idea, not just for supporting other engines, but for guaranteeing API stability and minimizing direct dependency on V8. And here I mean both node's dependency and addons' dependency. If the API is implemented in C++ it can be just wrappers around native JS engine classes.
  4. Different features implemented by different JS versions - Node should guarantee that all ES6 features are implemented. That's it. If all engines support all ES6 features then there should be no differences between them. And guaranteeing that future versions of ES will work with node would be even simpler: if some engine is behind for a long time, node can just deprecate that engine in favor of another.

I have already checked the V8 and SpiderMonkey APIs, and creating wrappers that can wrap them both won't be that difficult. I didn't check ChakraCore, but I guess it's implemented the MS way, which means "stay compatible for decades".

So basically where I see the future now is the neutral API, and how this API is engineered. However, this shouldn't be done in a hurry. Maybe defining the problems and how the neutral API should deal with them (and trying to implement it) would be the best idea. I also think that the neutral API doesn't have to be strictly node-dependent; maybe one could use it to create wrappers for their own bindings to any JS engine.

BTW I would like to contribute to the neutral API. I started writing my own, but I think this is a much better opportunity to create something really good.

trevnorris commented 8 years ago

@kobalicek

Different features implemented by different JS versions - Node should guarantee that all ES6 features are implemented. That's it.

But they don't currently. And even if they did, what about the new ES2016 features being released? No two VMs are going to have exactly the same support. This is a burden that will be pushed onto module developers: choosing which VMs they want to support.

kobalicek commented 8 years ago

@trevnorris Node can always maintain a compatibility matrix. My point was that before switching to a neutral API there should be a list of features that will be supported by all engines, and this should be ES6.

ljharb commented 8 years ago

@kobalicek That's far more difficult than you suspect. As the maintainer of the es5-shim and es6-shim, I can assure you that no engine even complies with all of ES5 yet (which was published in 2009 and updated in 2011), let alone ES6 (published last June). The compatibility tables show only a partial snapshot of the variances across engines, and are not at all exhaustive.

I'm hugely in favor of being VM neutral, but it is accurate that it will be an increased burden on both module publishers and consumers - especially if there's no specification for metadata in package.json that tooling can use to identify engine support (obv the "engines" property could be used but everyone would have to agree on a format).
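As a purely hypothetical sketch of what such metadata-driven tooling could look like (the per-engine keys under `engines` below are an assumed convention for this example only; no such format has been agreed on):

```javascript
// Hypothetical tooling check against a per-engine "engines" convention in
// package.json. The { v8: ..., chakra: ... } layout is assumed for this
// sketch only; no such format has been standardized.
function engineSupported(pkgEngines, runtimeName) {
  if (!pkgEngines) return true; // no metadata: assume compatible
  return Object.prototype.hasOwnProperty.call(pkgEngines, runtimeName);
}

const pkg = { engines: { v8: '>=4.5' } }; // hypothetical package.json data

console.log(engineSupported(pkg.engines, 'v8'));     // true
console.log(engineSupported(pkg.engines, 'chakra')); // false
```

A real checker would also have to compare version ranges per engine, which is exactly the part everyone would have to agree on.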

kobalicek commented 8 years ago

@ljharb Yeah, allowing packages to set a required ES version makes sense, it would probably solve many other problems.

BTW I know that there is no engine that fully supports ES6 yet. What I meant is that by the time the neutral API is finished and working for all engines, we may be lucky enough that they already support ES6. I believe it will take some time to finish; it's not work for 2 days.

blakmatrix commented 8 years ago

I see this discussion thread as a possible solution to a near-term problem; however, I'm not certain the problem is clearly articulated and defined from this thread alone (i.e. something is missing). If we can't define the problem in all its dimensions, we are likely to miss addressing some fundamental strategic issues. I believe that leads to the sentiment of others in this thread and the hesitancy toward diverting time to this perceived issue.

Historically, a solution's success is usually directly proportional to the problem's definition. What I'm hoping to achieve with my comment in this thread is to help resolve the issue... the perceived dissonance, which seems to stem from, metaphorically, putting the cart before the horse.

What are we hoping to achieve, how are we going to measure it, and ultimately how are we going to do it?

I personally love this approach to problem definition and eventual solving. While I'm not advocating following this approach strictly, I think it may be beneficial to read through the process, as it may trigger parts of the community to think about specifics and scope, and may just lead us all to a greater understanding of the problem and potential solutions.

  1. Define the problem or need
    1. What is the essential problem, and what is its scope (how far are we willing/allowed to go to solve this vs. leaving it to others to fix)?
    2. What is the desired outcome to addressing this problem (how will we know we solved it)?
    3. Who will benefit and why? (Has the market failed to address this? If so, why?)
  2. Justify the need.
    1. Does the problem align with our goals/mission/strategy and priorities?
    2. What are the desired benefits and how will we measure them?
    3. How will we know the solution has been implemented?
  3. Contextualize the problem
    1. Have we tried approaching this in some form in the past, and if so, what were they specifically?
    2. What have others tried/done?
    3. What are the internal and external constraints to implement a solution? (Internally, do we have the resources? Externally, do we--should we have the support, authority, and ability(e.g. intellectual constraints) to implement a solution?)
  4. Define the problem statement
    1. Is this a single problem (or is it perhaps multi-faceted)?
    2. What are the requirements that the solution must meet?
    3. Who will we engage to solve the problem?
    4. What information/language should the problem statement include?
    5. What do solvers need to submit?
    6. What incentives do solvers need?
    7. How will solutions be evaluated and submitted?

Recap

It appears the current problem statement might go something like this:

Node.js is increasingly being used on and across many different environments and varieties of JavaScript engines; however, officially (I hope I can say that), the Node.js Platform is only available and supported with the V8 runtime. Increasingly, and with some effort, "the Node.js Platform" has been coming online and shipping with other JavaScript engines like Microsoft's Chakra and on hardware development boards like the Tessel. This trend is likely to continue due to the success of the Node.js vision, implementation, core contributions, and the community. Developers, groups, and organizations may want to use the Node.js platform, but due to technical constraints it may become difficult for them to utilize the official platform, effectively creating a resource barrier to adoption and/or the creation of faux unofficial versions of the Node.js platform. To effectively enable widespread adoption, and to facilitate the acceleration of development on the Node.js platform along with its growing ecosystem and community, we must consider and address the implications of continuing to officially support the Node.js platform only with the V8 JavaScript engine. Additionally, with the rise of efforts to port "the Node.js platform" to other JavaScript engines and microcontrollers, we must also consider and address how this will affect Node.js and its community, along with adoption/development.

Current Line of Thought

The general consensus, as suggested, seems to be a solution involving an interoperability layer between the Node.js framework and the vast array of possible JavaScript engines/microcontrollers. This is not, of course, the only solution, nor is it necessarily the perfect fit, given the length of the back-and-forth on advantages, disadvantages, and lateral thinking. Based on my reading of this thread, I believe further attention to the details and context of the problem is needed: the problem likely needs to be much better defined than what has arisen here, with other potential solutions explored, since starting with a solution before well-defining a problem tends to lead to a narrow vision of the problem.

Many stakeholders within this thread are fairly opinionated on the potential and execution of this proposed solution; however, consensus doesn't seem to be moving in any particular direction. I believe this is because stakeholders have difficulty understanding the problem at all levels simultaneously and what's honestly really at stake. It would be diligent to define what we are solving for, why, and how we will know we solved it in the end.

trevnorris commented 8 years ago

Another option that I've investigated and found promising is creating a lower-level JS API that the existing API can sit on top of. Then the binding point for the VM is at the JS layer, which reduces the native API problem to the public API. This gives more flexibility in how much is ready before release and more time to work out the best solution. While each VM would need to keep its source updated, it would be able to use all the potential performance techniques that would likely be prevented by a completely generic API.

Additionally each VM can choose to use as much of the abstracted native API as they wish, but this is not absolutely necessary for initial release.

There are more advantages, like being able to create a more comprehensive set of tests to check for VM compatibility, but I'm not going to get into all of those here.
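The layering described above might be sketched, very loosely, like this (every name below is invented for illustration; this is not a proposed API):

```javascript
// Loose sketch of a low-level JS binding surface as the VM attachment
// point. Each VM port supplies an object implementing these (invented)
// methods, and the public API is written only against that surface.
const requiredBindings = ['readFileRaw', 'scheduleCallback'];

function createPublicApi(binding) {
  // Compatibility check: the VM port must implement the whole surface.
  for (const name of requiredBindings) {
    if (typeof binding[name] !== 'function') {
      throw new Error('VM binding is missing: ' + name);
    }
  }
  // lib/ sits on top and never touches the VM directly.
  return {
    readFile(path, cb) {
      binding.scheduleCallback(() => cb(null, binding.readFileRaw(path)));
    },
  };
}

// Mock "VM port" standing in for what a V8 or Chakra build would provide.
const mockBinding = {
  readFileRaw: (path) => 'contents of ' + path,
  scheduleCallback: (fn) => fn(), // synchronous only for this sketch
};

const api = createPublicApi(mockBinding);
api.readFile('/tmp/x', (err, data) => console.log(data));
```

The compatibility test suite mentioned above would then amount to a set of assertions run against any candidate binding object before it is accepted.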

rdodev commented 8 years ago

I like @blakmatrix's analytical approach, a lot. I think many of us have a "spidey sense" that making nodejs VM-agnostic is the way to go, but we can't head down that path if we don't dot the i's, so to speak. Before making a final decision, I would like the working group/community to come up with a punch-by-punch description of the work required, a high-level list of risks, and what the trade-offs will be relative to keeping node V8-centric. All this to say: let's get the "gut feeling" or "spidey sense" out of the equation and make the decision a rational one.

aayushkapoor206 commented 8 years ago

A node is a point at which lines or pathways intersect or connect. It already has the perfect name for standardisation of API to connect various VMs :smile:

benjamingr commented 8 years ago

@trevnorris : Another option that I've investigated and found promising is creating a lower level JS API that the existing API can sit on top of. Then the binding point for the VM is on the JS layer, which reduces the native API problem to the public API.

+100 to this. If we can get this working and modular, and minimize the API surface of actual V8 that we have to touch, that would be a huge win.

The current approach (shimming V8's APIs) is very problematic as those are a moving target. If we can make a clear layer where implementors bind to Node and they only have to implement that binding it could be kept constant and relatively small.


Native modules: I've found them an absolute pain to write; even with nan it was very frustrating compared to other platforms like Python. On the other hand, maybe it's a win, since it's part of why the vast majority of userland libraries are written in pure JS.

mcollina commented 8 years ago

I personally do not like offering multiple downloads, one for each VM. Moreover, that is an impressive number of builds and packages to prepare.

However, I'm in favor of making node VM-agnostic, but I think distribution should stay and remain single-VM: the "Node" brand should be a natural evolution of what it is now. Giving people the option "should I use V8 or Chakra?" is not really good and would probably slow adoption. On the other hand, for other vendors that want to ship "node" with a different VM, there should be an "easy" path to do so. A possible option might be to have a "node internal" thing that is VM-agnostic, which other vendors can embed.

jasnell commented 8 years ago

A twofold approach that has been discussed before would be a) abstracting and minimizing the V8 API surface area as much as possible in order to make using other VMs easier, and b) providing a kind of certification process that allows others to swap in alternative VMs but tests and certifies a minimum level of functionality which, if successfully tested, qualifies it as "Node compatible". This carries its own host of issues, to be sure, but it provides a path forward. One of the possible requirements could be that the VMs must implement a minimum common set of JS language features in a consistent way, and that any additional features supported, or variances in support, are enabled only through command line flags.


seishun commented 8 years ago

I don't see any point in adding a "VM-neutral" API, it's only going to make things worse. The V8 API is constantly changing because their devs' idea about the best API is constantly changing as they gain more experience. Is the same not going to happen with the proposed "unified" API? Or is it just going to stagnate to preserve backward compatibility?

ariya commented 8 years ago

FWIW there is no significant progress on Nodyn in 2015. Even its README says so:

This project is no longer being actively maintained.

trevnorris commented 8 years ago

@jasnell (b) is exactly one reason why focusing on a JS low-level API as the binding point, one that the existing API can sit on top of, is an advantage. Starting from scratch, it can be developed to be more explicit and easier to test for compatibility. Then lib/ essentially sits on top of this API, which any project can import and use reliably if the low-level tests all pass.

inikulin commented 8 years ago

I know this will more likely not be considered seriously as an option, since it would completely break compatibility for the current native packages, but I would like to bring attention to this tweet by @mraleph: https://twitter.com/mraleph/status/691266438246572032.

What if we did not expose any native API for native add-ons, but implemented a good FFI instead?

The benefits are:

  1. No need for a native API abstraction level at all. No breaking of the JS-land and native-land contracts across versions.
  2. FFI API is easier to maintain and test, since it's minimalistic.
  3. JS-native interaction becomes extremely simple: you will not need significant knowledge of C++ or V8 internals to use native code.
  4. Native add-ons can be easily implemented on top of this: pure C/C++ code plus a JS facade that performs calls to the native land using FFI under the hood.
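Point 4 might look roughly like this, with a mock standing in for the FFI layer (the `loadLibrary` function, the library name, the signature format, and the `crc32` symbol are all invented for this sketch; nothing here is node-ffi's real API):

```javascript
// Sketch of a native add-on as pure native code plus a JS facade. The
// loadLibrary() mock below stands in for a real FFI layer; the library
// name, signature format, and crc32 symbol are all invented.
function loadLibrary(name, signatures) {
  // A real FFI would dlopen(name) and bind each declared symbol; here we
  // fake a single native function so the facade pattern is runnable.
  const fakeSymbols = { crc32: (buf) => buf.length * 7 };
  const lib = {};
  for (const fn of Object.keys(signatures)) lib[fn] = fakeSymbols[fn];
  return lib;
}

// The JS facade that module users actually require(); no C++ involved.
const native = loadLibrary('libchecksum', { crc32: ['uint32', ['buffer']] });

function checksum(str) {
  return native.crc32(Buffer.from(str));
}

console.log(checksum('abc')); // 21 with the fake symbol above
```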

Fishrock123 commented 8 years ago

@inikulin Only if it is fast enough to support low-(processing-)power platforms, such as the Raspberry Pi. To date no FFI has been fast enough there, although someone I talked to at Node.js Interactive (I forget who) thought it would be possible.

ghost commented 8 years ago

The prospective abstraction API, which would render node.js engine-agnostic, would also enable node.js folks to incorporate their own engine in the future if necessary, and to support some more interesting engines such as https://github.com/cesanta/v7, which provides JS for embedded devices: MIPS and such.

IMO, the abstraction API should also mandate at least a subset of front-facing API features for native modules or better a neutral FFI layer.

@Fishrock123, as far as I know, we can use P/Invoke on the RPi with Mono (Unix) and .NET Framework (Windows 10 on RPi 2) in C#, where P/Invoke implies FFI in the .NET world. Some related work is going on in the CoreCLR repo with support from community folks: @benpye and @kangaroo.

kobalicek commented 8 years ago

FFI without JIT is very slow, that simply cannot be the default.

mcollina commented 8 years ago

@trevnorris: @jasnell (b) is exactly one reason why focusing on a JS low level API as the binding point, and that the existing API can sit on top of, is an advantage. Starting from scratch it can be developed to be more explicit and easier to test compatibility. Then lib/ essentially sits on top of this API that any project can import and use reliably, if the low level tests all pass.

How does this "js-only" layer support libraries like imagemagick, leveldb, and the like? It does not seem possible.

orangemocha commented 8 years ago

@inikulin @Fishrock123 I am a strong believer in the FFI approach. See https://github.com/nodejs/api/issues/10. As proven by the experiment mentioned there (https://github.com/nodejs/nan/issues/349#issuecomment-110569177), I don't think it has to be slow at all.

kobalicek commented 8 years ago

The "experiment" you posted has nothing to do with FFI.

Zayelion commented 8 years ago

Speaking as someone who doesn't write C++ and uses node-ffi: having that baked into core, with documentation, rather than having to pull out code and compile it with extra headers and possible modifications, would be amazing. But I think that is a separate issue, if...

FFI without JIT is very slow, that simply cannot be the default.

... is true. Is there evidence for this?

orangemocha commented 8 years ago

@kobalicek it shows a way to invoke native C++ methods from Node. Do you have a different expectation for what FFI should do?

inikulin commented 8 years ago

@orangemocha The idea was to call native methods from user land, like https://github.com/node-ffi/node-ffi. For reference, LuaJIT has a fancy FFI implementation: http://luajit.org/ext_ffi.html

orangemocha commented 8 years ago

Defining the call signature from JavaScript is bound to lead to a less efficient implementation; no wonder node-ffi is known to be slow. The approach I am describing requires exporting those definitions from C++ and leveraging the compiler to generate the plumbing code.
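The per-call cost being described can be illustrated with a toy model (no real native code is involved; both helper functions below are invented for the illustration):

```javascript
// Toy model of why declaring call signatures in JS costs more per call
// than compiler-generated plumbing: the JS-side signature must be
// consulted and arguments marshalled on every single invocation.
function makeDynamicCall(signature, nativeFn) {
  return function (...args) {
    // Re-interpret the signature and marshal each argument, per call.
    const marshalled = args.map((a, i) =>
      signature.args[i] === 'int' ? a | 0 : String(a));
    return nativeFn(...marshalled);
  };
}

// "Precompiled" stub: the marshalling is fixed once, up front, the way a
// C++-side definition lets the compiler generate the plumbing.
function makeStaticCall(nativeFn) {
  return (a) => nativeFn(a | 0); // specialized for one int argument
}

const fakeNative = (x) => x + 1; // stands in for a C function
const dynAdd = makeDynamicCall({ args: ['int'] }, fakeNative);
const statAdd = makeStaticCall(fakeNative);

console.log(dynAdd(41), statAdd(41)); // 42 42
```

Both produce the same result, but the dynamic path does signature lookups and type dispatch on every call, which is the overhead a compiler-generated stub avoids.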

RReverser commented 8 years ago

@inikulin I don't really see what problems it solves in regard to VM neutrality.

At the very least, you need Node to know how to execute JS in a given VM, so you need an abstract native API for that, and obviously the API for executing JS cannot be called from within JS itself (because you still don't know how to execute it :smile: ), so FFI is unrelated here.

Same for feature flags: how do you disable or enable a certain JS feature from within JS if the engine executing it needs to already know what to support before running it?

Same question for many other APIs that currently exist at the engine level. FFI matters only for 3rd-party native libs; it doesn't seem to help with the APIs to the engines themselves. Those need to be available at the native level that wraps and executes JS inside.

inikulin commented 8 years ago

@RReverser Well, at least any API changes in different engines could then be tested on the node side, so a new release would not break packages that use native code.

inikulin commented 8 years ago

APIs to engines themselves

Is this really common? Yes, I know there are some packages that require access to engine-intrinsic APIs, like fibers, WebWorkers, etc. But still, nothing else comes to my mind. Most native add-ons I've seen are just wrappers around some existing native code.

orangemocha commented 8 years ago

An FFI approach to the native API doesn't help the interface between Node and the engine, but it allows writing native modules that don't depend on specific engine APIs. So it doesn't solve 100% of the problem, but it solves a big part of it, making the rest easier to tackle.

RReverser commented 8 years ago

Is this really common? Yes, I know there are some packages that require access to engine-intrinsic APIs, like fibers, WebWorkers, etc. But still, nothing else comes to my mind. Most native add-ons I've seen are just wrappers around some existing native code.

Well, that's the entire point of the VM-neutral API design discussion. Note that this thread is not so much about add-ons as about the API that allows Node itself to communicate with engines (and yes, such an API can be reused in native add-ons that require it, but that's just secondary).

mcollina commented 8 years ago

Regarding FFI: how would you expose the event loop/libuv? That is a key feature, IoT included. IMHO forcing FFI would mean that everything that uses libuv becomes unsupported.

kobalicek commented 8 years ago

@Zayelion This is the first Google result for "node-ffi performance":

http://programminggiraffe.blogspot.com/2014/10/nodejs-ffi-vs-addon-performance.html

FFI is unrelated to API neutrality, I think it should be discussed somewhere else.

inikulin commented 8 years ago

FFI is unrelated to API neutrality, I think it should be discussed somewhere else.

Note that this thread is not so much about add-ons as about API that allows Node itself to communicate with engines

But AFAIK the main PITA is that this API leaks into the add-ons. At the very least we would not need NAN. Maybe I'm wrong, but I see this whole abstraction layer as a big blocker for VM adoption.

inikulin commented 8 years ago

regarding ffi: how would you expose the event loop/libuv?

Is there any particular example where it's required?

RReverser commented 8 years ago

Maybe I'm wrong, but I see this whole abstraction layer as a big blocker for VM adoption.

Personally, I don't see why (or why it's bad to have VM APIs available to modules when they need them for more advanced things, like those you mentioned).

kobalicek commented 8 years ago

FFI just cannot replace VM-API, period :-)

RReverser commented 8 years ago

FFI just cannot replace VM-API, period :-)

Agreed, they're somewhat unrelated. @inikulin if you want to improve FFI performance, it's worth contributing to node-ffi, but that won't help with Node<->VM abstracted communication.

orangemocha commented 8 years ago

regarding ffi: how would you expose the event loop/libuv?

That is a good question and probably relevant to this issue. However, I think it's independent of using ffi vs another engine abstraction layer.

To answer your question, we could continue exposing libuv to native modules that use FFI.

mhdawson commented 8 years ago

In response to the original question I see being VM neutral as a good thing.

From my understanding of the discussion so far, most of it is around how we get there and the level of effort, as opposed to whether we think it's a good thing or not.

Since it's likely to take a lot of work to do right, and consequently a long time to complete, I think defining some incremental steps that move us in the right direction and have other benefits would be a good way to start.

For example, a VM-neutral API for modules, as discussed earlier, would help avoid having to update modules when there are new V8 versions. Even if this API only covered a subset of the V8 APIs, if that subset covered the interfaces most commonly used by native modules it could help significantly. It would be clear to native module developers that when they went outside the VM-neutral API they would be more likely to have to update when new V8 versions came out.

We could start incrementally by looking at a small set of the most commonly used APIs, and experiment by implementing the neutral API and seeing what the challenges would have been in supporting that subset across V8 versions. We could also evaluate whether there are any performance issues. I think this is along the lines of what @kobalicek described in his option 3 above, although I think we should investigate further whether C or C++ is the best choice. Some concrete next steps would include:

The specific implementation from those steps would quite likely not end up being what we use in the long term, but it would give us hard data/examples of the challenges/issues we would have to work through. As part of our involvement in the API working group, I'm hoping @stefanmb and I will have time to do some of that over the next few months.

aruneshchandra commented 8 years ago

Some of the advantages would be:

• Normalization of the Native API (which currently breaks every major release)
• De-facto standard for any JS VM to add Node.js support.
• Centralization of test infrastructure would increase stability of non-V8 platforms.
• Standardization of API could allow us to adopt new V8 versions outside of breaking major releases.

Adding two more points to @mikeal's original list of advantages

There's a long discussion about how to do this. Without guarantees from all the target VM vendors that they will support this neutral API it could fall on us to make that work.

From ChakraCore’s perspective, we are willing to support the development of a neutral API and are ready to have members of our team participate in the implementation efforts.

Qix- commented 8 years ago

Having multiple VMs can help spread the reach of Node.js to platforms beyond what is currently supported

Where is V8 not supported? According to Wikipedia, V8 is supported on IA-32, x86-64, ARM, MIPS, PowerPC, and IBM s390. It's standard C++, so if you can't port GCC to the specific platform and write a cross compiler, then you're going to have much bigger problems than having to write your own engine.

Multiple VMs will bring new perspectives to Node.js and potentially help foster innovation in things such as the diagnostics and tooling experience, among various other things.

Software doesn't need "new perspectives". That's a fancy way of basically screaming feature creep.

For example, in ChakraCore we are working on developing a new set of diagnostic APIs, which could be made interoperable across VMs and toolsets, and on bringing new capabilities such as Time Travel Debugging to Node.js.

This is vaporware until it's developed and released. I'm not personally on the whole "I Hate Microsoft" bandwagon, but you can't ignore Microsoft's past of promising one thing and delivering another.

Develop it, get it stable, release it as open source - make Chakra prove itself a bit.

From ChakraCore’s perspective, we are willing to support the development of a neutral API and are ready to have members of our team participate in the implementation efforts.

Then do it. Obviously this is what needs to happen before we start monkey patching javascript engines in.


My general opinion on this remains that including more engines here is going to cause unnecessary bloat and fragmentation, even with a unified API.

This isn't to say Node shouldn't be VM neutral. It's to say we shouldn't actually include all of these engines directly in our source trees. That includes Chakra. If Node goes VM neutral, leave it up to Microsoft to build against node (not the other way around) and to maintain/release their own Chakra-backed distribution.