rock-core / vscode-rock

VSCode extension for Rock integration

load environment from env.sh (with a watcher for file changes) #12

Closed: g-arjones closed this issue 6 years ago

g-arjones commented 6 years ago

The "get environment" strategy we are using to debug ruby packages seems robust enough and bundle's binstubs are extremely slow to load so maybe doing the same for all tasks would improve user experience

doudou commented 6 years ago

We must use autoproj exec. It's always up-to-date, that is, the environment will be correct even if env.sh has not been regenerated (yet). In the long run, it would also give us the ability to use package-specific environments.

If autoproj exec is too slow, then let's try to make it faster.
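For comparison, a rough sketch of the autoproj exec route as it could look in the extension, assuming the workspace binstub lives at .autoproj/bin/autoproj (the helper name is made up):

import { spawn } from 'child_process';
import * as path from 'path';

// Run a command through `autoproj exec` so the environment is whatever
// autoproj would generate right now, rather than a possibly stale env.sh.
function autoprojExec(wsRoot: string, cmd: string, args: string[]) {
    const autoproj = path.join(wsRoot, '.autoproj', 'bin', 'autoproj');
    return spawn(autoproj, ['exec', cmd, ...args], { cwd: wsRoot, stdio: 'inherit' });
}

// e.g. autoprojExec('/home/arjones/flat_fish/dev', 'amake', ['drivers/iodrivers_base']);

The trade-off discussed below is purely start-up cost: every such call pays the bundler/binstub load time.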

g-arjones commented 6 years ago

If autoproj exec is too slow, then let's try to make it faster.

Ideas? To be honest, it's barely usable. Not only autoproj exec, but all the binstubs. In my tests it takes ~10s just to load bundler's stuff. Building a package that is already built takes roughly 20s (to do nothing, from the user's perspective).

doudou commented 6 years ago

Building a package that is already built takes roughly 20s (to do nothing from the user's perspective).

I don't have this experience at all ... I'm running at ~1-2s.

doudou commented 6 years ago

Which version of bundler do you have installed ?

doudou commented 6 years ago

I haven't seen such a bad performance with other people here (that use Atom and autoproj) either.

doudou commented 6 years ago

For the record:

$ time autoproj exec ls
> 1,47s user 0,11s system 99% cpu 1,574 total
$ bundler version
> Bundler version 1.16.1 (2017-12-21 commit 0034ef341)
g-arjones commented 6 years ago

arjones@openmediavault:~/flat_fish/dev$ bundler --version
Bundler version 1.16.0

Binstub:

arjones@openmediavault:~/flat_fish/dev$ time .autoproj/bin/alocate drivers/iodrivers_base
/home/arjones/flat_fish/dev/drivers/iodrivers_base

real    0m4.534s
user    0m0.600s
sys 0m0.064s

No binstub:

arjones@openmediavault:~/flat_fish/dev$ time alocate drivers/iodrivers_base
/home/arjones/flat_fish/dev/drivers/iodrivers_base

real    0m0.675s
user    0m0.584s
sys 0m0.084s
doudou commented 6 years ago

I have a theory.

The problem is that once you have loaded env.sh, the binstubs in .autoproj will reset the environment and exec(). In other words, you're loading the env twice.

If that's the case, then given that the end goal for me is that we do NOT load env.sh before starting VSCode, we're fine as-is.

g-arjones commented 6 years ago

Not the case, I used different clean envs to time that.

doudou commented 6 years ago

Not the case, I used different clean envs to time that.

Waitaminute.


real    0m4.534s
user    0m0.600s
sys 0m0.064s

If time is spent neither in user, nor in sys, then where ? Waiting for I/O ?

doudou commented 6 years ago

Not the case, I used different clean envs to time that.

I'm interpreting that as:

  1. fresh terminal
  2. run .autoproj/bin/alocate
  3. fresh terminal
  4. source env.sh
  5. run alocate

Is that right ?

g-arjones commented 6 years ago

s/fresh terminal/different ssh session/g 👍

doudou commented 6 years ago

Re-waitaminute.

Could you try to do both commands again, but twice each ? I want to rule out a cold cache.

(Btw, 4.5s is not 20s. Difference between you and the others, or a little bit of exaggeration?)

g-arjones commented 6 years ago

Could you try to do both commands again, but twice each. I want to rule out a cold cache.

That helps; the second time, the binstub takes roughly the same as running with env.sh.

(Btw, 4.5s is not 20s. Difference between you and the others, or a little bit of exaggeration?)

Difference between me and the others.

If time is spent neither in user, nor in sys, then where ? Waiting for I/O ?

Good point. I don't know what bundler/setup is doing but could this be a problem:

arjones@openmediavault:~/flat_fish/dev$ du -hs ~/.autoproj/gems/
1.4G    /home/arjones/.autoproj/gems/

I have several versions of the same gem (I've been using this installation for a while). Maybe that is slowing bundler down?

doudou commented 6 years ago

Difference between me and the others.

Then ... why the difference ?

Maybe that is slowing bundler down?

Might be. I'm running on an SSD, so it would definitely have less of an effect here. I've asked the other guys at 13 to do some measurements (they're unfortunately still on shitty spinning rust). However, the list of gems is pinned in Gemfile.lock, so bundler shouldn't have to read all of it.

You could try bootstrapping a fresh install with the --gems-path option to get a separate gem folder.

In any case, if the second run takes the same amount of time, you're down to 600ms with a hot cache. The real question now is how often one gets a cold cache in the first place. I'm sure you'd want it even faster, but in my view anything under 2s is not worth hacking endlessly around. Just running cmake takes an order of magnitude longer, not to mention gcc.

doudou commented 6 years ago

You could try a bundler clean, but be aware that it would uninstall gems that are unused by the current workspace but still in use in other workspaces. That is something you should be able to fix with autoproj osdeps, unless the removed gem is an autoproj dependency itself, in which case you would need to re-install autoproj first with the autoproj_install script.

g-arjones commented 6 years ago

Then ... why the difference ?

No idea.

Might be. I'm running on a SSD, so it would definitely have less of an effect here

I have also tried with pretty fast SSDs and it felt slower with the binstubs, but I didn't measure, so I might be biased.

you're down to 600ms in a hot cache. The real question now is how often does one get a cold cache

600ms is fine but only daily use will tell how often one gets that.

Let's keep this thread open for a while... I will do some investigation and report my findings here.

doudou commented 6 years ago

One thing I noticed while debugging is that the codepath that resolves the selected package is taken more than once when opening/closing/giving focus to files. Things like this would compound the problem.
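A possible mitigation on the extension side would be to memoize the resolution per file path, so that focus changes do not repeat the expensive lookup. A sketch, with resolvePackageForPath standing in for whatever the extension actually calls:

// Cache the resolved package per file path so repeated open/focus events are cheap.
const packageCache = new Map<string, string>();

async function resolvePackageCached(
        filePath: string,
        resolvePackageForPath: (p: string) => Promise<string>): Promise<string> {
    const cached = packageCache.get(filePath);
    if (cached !== undefined) { return cached; }
    const pkg = await resolvePackageForPath(filePath);
    packageCache.set(filePath, pkg);
    return pkg;
}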

g-arjones commented 6 years ago

I have bootstrapped a new install and the situation has improved. It seems that the excess of gems was indeed slowing bundler down.