Open bryphe opened 7 years ago
I completely agree. It's bad to design under false/incomplete assumptions about performance.
My main point was that it's critical to get all the UI-blocking work out of the load path, as much as possible. It doesn't necessarily come down to VimL vs. JS performance (though we should measure that too) - it's often more a matter of the massive number of runtime dependencies JavaScript developers put in the `require()`
code path. I worry that each color scheme or keyboard-configurator plugin will pull in one little dependency on a utility library which brings in many more.
So - all that to say, even if the compilation strategy is not to target VimL but to target JavaScript, having some sort of staging that allows a condensed representation is very desirable.
I don't know if VimL would be faster than executing remote Neovim calls from JS. Perhaps if you could batch them all and send them together from JS, it would be faster than VimL, as long as you do some kind of staging/compiling to a condensed representation that sheds all the `node_modules` dependencies.
It's difficult because no single dependency is the culprit - it's death by 1,000 paper cuts. I strongly suggest going the way of VSCode: if you want to extend the UI, it has to be done according to some API/constraints that allow load time to be optimized. It can still be JS.
I did like the idea of a VimL target because it would actually let people migrate to Oni gradually. I would begin by writing my JS/JSON configuration, let Oni generate the output VimL for configuring key bindings, colors, and other things, and then use that in my Vim today. Of course, when you go through the Oni configuration you get all these great UI managers for viewing/editing keybindings, etc., but at least people can start migrating their Vim config over today, before they switch to Oni.
I like how you're thinking about this problem, and you're right that we should get a sense of the actual performance differences. Are calls to Neovim fast? Do they themselves have to go through VimL, or do they go right to native code? Can they be batched?
> I don't know if VimL would be faster than executing remote neovim calls from JS. Perhaps if you could batch them all and send them all from JS,
`nvim_call_atomic` exists for batching all RPC calls.
> Are calls to NeoVim fast? Do they themselves have to go through VimL, or do they end up going right to native code?
RPC is quite fast, actualvim proves this. VimL execution isn't involved unless you send VimL to be evaluated. But we currently don't have a lot of native functionality exposed directly as RPC functions.
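To make the batching idea concrete, here is a minimal sketch of how a JS front end could collect commands into a single `nvim_call_atomic` payload. `buildAtomicCalls` is a hypothetical helper (not an existing Oni or Neovim client API), and the actual msgpack-rpc transport is elided - the point is only the shape of the batch: one array of `[method, args]` pairs sent in one round trip.

```javascript
// Sketch: fold many individual API calls into one nvim_call_atomic payload,
// so a whole batch of configuration commands crosses the RPC boundary once.
// buildAtomicCalls is a hypothetical helper; a real client library would
// send the result over its existing msgpack-rpc channel.
function buildAtomicCalls(commands) {
  // nvim_call_atomic expects an array of [methodName, argsArray] pairs.
  return commands.map((cmd) => ["nvim_command", [cmd]]);
}

const batch = buildAtomicCalls([
  "set number",
  "colorscheme desert",
  "nnoremap <C-p> :Files<CR>",
]);

// A real client would now issue a single request, roughly:
//   nvim.request("nvim_call_atomic", [batch])
console.log(JSON.stringify(batch[0]));
```

The tradeoff is that everything in the batch executes synchronously on the Neovim side, so this helps with round-trip overhead but not with the cost of the commands themselves.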
As part of the performance push, @jordwalke proposed the idea of caching the JS configuration via a raw VimL script:
From #351, quick summary of @jordwalke's idea:
There were also good discussions around this in #20 and #35 . I felt it deserved its own issue since it keeps being sidelined.
In terms of evaluating this proposal, it would be helpful to benchmark today's loading times for the pieces that could be VimL-optimized (like `config.js`), then try the same strategy using a VimL script and measure the delta, so we have an understanding of the potential performance gain. We should also understand the exact bottleneck - for example, if file I/O is the bottleneck, we might not save much if we have to check whether the file has changed anyway (either through a hash or metadata).
For cases like `config.js`, which is essentially a dictionary, serializing to VimL would be relatively straightforward. We'd have to think about how to handle richer functionality - if a configuration option were ever gated by a function, how would the VimL compilation handle that?
One other issue to consider is that VimL loading in Neovim is synchronous. There are potentially ways we could load the configuration in JavaScript asynchronously so that it does not block our time to render. Knowing the bottlenecks for our config loading can help us make the right tradeoffs here. Hopefully we can get some time to pursue this soon!
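For the straightforward dictionary case, the serialization step could look like the sketch below. The `g:oni_` variable prefix and the `configToVimL` helper are assumptions for illustration, not an existing Oni API; function-valued options (the hard case raised above) are simply skipped with a comment here.

```javascript
// Sketch: serialize a flat JS config dictionary into VimL "let" statements.
// Assumes values are strings, numbers, or booleans; function-valued options
// are skipped, since they can't be represented as static VimL.
function configToVimL(config) {
  const lines = [];
  for (const [key, value] of Object.entries(config)) {
    if (typeof value === "function") {
      lines.push(`" skipped dynamic option: ${key}`);
    } else if (typeof value === "boolean") {
      lines.push(`let g:oni_${key} = ${value ? 1 : 0}`);
    } else if (typeof value === "number") {
      lines.push(`let g:oni_${key} = ${value}`);
    } else {
      // VimL single-quoted strings escape ' by doubling it.
      lines.push(`let g:oni_${key} = '${String(value).replace(/'/g, "''")}'`);
    }
  }
  return lines.join("\n");
}

const viml = configToVimL({
  tabSize: 2,
  useSmartIndent: true,
  fontFamily: "Fira Code",
});
```

A cached `.vim` file produced this way would still need an invalidation check (hash or mtime of `config.js`), which is exactly why the file-I/O bottleneck question above matters.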
As an aside, I'd also like to be able to easily get the performance data we need for this - right now it's not easy to answer questions like "How long does it take to load my config?" without digging into the Chrome performance tools. React Native actually has a nice example of handling this - a built-in performance dashboard. Something higher-level like that would be very helpful for Oni too, to identify bottlenecks and help us make investments in the right places.
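Even before a full dashboard exists, a tiny timing helper would answer the "how long does my config load take?" question. This is a sketch under the assumption of a Node-style runtime (`process.hrtime.bigint` is a standard Node API); `timeSection` itself is hypothetical, not an existing Oni function.

```javascript
// Sketch: time a named section of startup work and log the elapsed time,
// as a stopgap for a proper performance dashboard.
function timeSection(label, fn) {
  const start = process.hrtime.bigint();
  const result = fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(2)}ms`);
  return { result, elapsedMs };
}

// Example usage (the "./config.js" path is illustrative):
//   const { result: config } = timeSection("load config", () => require("./config.js"));
const { elapsedMs } = timeSection("noop baseline", () => 42);
```

Logging these numbers per section on every startup would make regressions visible long before anyone reaches for the Chrome profiler.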