flokli opened this issue 4 years ago
Does your shell.nix use src = ./. ? That can cause a re-evaluation on every change, causing frequent reloads of the environment.

No, it's already spammy with the following shell.nix:
let
  pkgs = (import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/5de728659b412bcf7d18316a4b71d9a6e447f460.tar.gz";
    sha256 = "1bdykda8k8gl2vcp36g27xf3437ig098yrhjp0hclv7sn6dp2w1l";
  })) {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];
}
We sometimes re-evaluate twice (which is a bug in the watcher), which is probably why you see the output again when hitting enter. The other point is that if you use pkgs.mkShell, there are lots of variables that show up that are just from the nix builder; ideally we'd improve direnv to show the more relevant stuff first (and maybe give a hint at how to look at the full diff).
Next steps:
https://github.com/direnv/direnv/issues/68 is relevant here.
$ cat .envrc
export DIRENV_LOG_FORMAT=
eval "$(lorri direnv)"
This changes the output such that you only get "direnv: loading .envrc" rather than the changed environment variables printed to stdout. lorri init could output this instead of what it currently puts in .envrc.
This is hiding the problem. nix-shell is not meant to create development environments but to debug existing derivations, which results in environments that are overly noisy with variables that are never used.
I have started experimenting with another approach to build developer environments. The idea is to instead create a buildEnv that contains all the development dependencies.
First, create a derivation whose output contains a "/profile.sh" script:
profile.nix
with import <nixpkgs> {}; # TODO: pin as usual
# TODO: this is a pattern that could be encapsulated in a function called `pkgs.mkProfile`
buildEnv {
  name = "dev-profile";
  paths = [ rustc nodejs ];
  postBuild = ''
    cat <<PROFILE > $out/profile.sh
    export PATH=$out/bin:\$PATH
    PROFILE
  '';
}
Then in the .envrc:
mkdir -p .direnv
# NOTE: this is easily cacheable by using the same approach as lorri
nix-build profile.nix --out-link .direnv/profile
source .direnv/profile/profile.sh
watch_file .direnv/profile/profile.sh
watch_file profile.nix
And here is the new output:
direnv: loading .envrc
direnv: export ~PATH
@zimbatm I think that's a pretty elegant and light-weight approach. It differs from using lorri in two ways, as far as I can tell:

1. If you create a nixpkgs.nix and import ./nixpkgs.nix instead of import <nixpkgs> {} as in your example, and then change nixpkgs.nix, lorri will pick up on the change and trigger a rebuild, while the setup you've described will not.
2. lorri is asynchronous/non-blocking (for better or worse), whereas the approach you're describing is blocking.

Have you used this "profile.nix" setup in projects already? What did you find lacking? Perhaps there is some potential future version of lorri optimised for this type of setup - what should it provide for the best user experience?
Thanks! I have been thinking about this topic for quite a while now.

The lorri nix file tracing technique can be extracted and applied to direnv directly to solve that. I have a nix-build-cached experiment that does just that. It means that direnv will stat() all the watched files on each prompt, so there is a threshold where it becomes painful. But for most projects it's okay.

Initially, when we were talking about lorri, it was to become a full tool to manage development environments, and I feel like it has been stuck in that particular implementation. The implementation has become the end-goal instead. On my end I have a pile of experiments that haven't fully come together either (like the profile.nix idea). It would be cool to take a step back and think about what the ideal development environment would be.
I have started experimenting with another approach to build developer environments. The idea is to instead create a buildEnv that contains all the development dependencies.
God bless. We need more people thinking about this and throwing ideas against the wall!
- The lorri nix file tracing technique can be extracted and applied to direnv directly to solve that. […] But for most projects it's okay.
stat()s do not scale for big projects, sadly, but we definitely should expose 1) the watch-path extraction and 2) the watcher implementation as separate subcommands, so that people can re-use them in other contexts.
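As a rough illustration of that threshold (a toy benchmark, not lorri or direnv code; all names here are made up for the sketch), the per-prompt cost of stat()ing a watch list grows linearly with its size:

```python
import os
import tempfile
import time

def time_stats(paths, rounds=100):
    """Return seconds spent stat()ing every path `rounds` times --
    roughly what a prompt hook would pay per prompt, times `rounds`."""
    start = time.perf_counter()
    for _ in range(rounds):
        for p in paths:
            os.stat(p)
    return time.perf_counter() - start

def make_watch_list(n):
    """Create n empty temp files standing in for watched nix files."""
    d = tempfile.mkdtemp()
    paths = []
    for i in range(n):
        p = os.path.join(d, "f%d.nix" % i)
        open(p, "w").close()
        paths.append(p)
    return paths
```

On a warm local filesystem a few hundred stat()s per prompt are negligible; the pain starts with thousands of watched files or slow network filesystems.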
The implementation has become the end-goal instead. […] It would be cool to take a step back and think about what the ideal development environment would be.
You are right, we were focusing on a nicely polished command in the last 2–3 months, a research phase is in order again. We should set up a brainstorming session (anyone else wants in?).
It would be cool to take a step back and think about what the ideal development environment would be.
Yes!
We should set up a brainstorming session (anyone else wants in?).
I'm keen.
@zimbatm wrote:
The lorri nix file tracing technique can be extracted and applied to direnv directly
Can you explain how you envisage this?
@zimbatm wrote:
NOTE: this is easily cacheable by using the same approach as lorri
What exactly would you cache here / which approach are you referring to? My understanding is that Nix will only rebuild what is necessary anyway.
If you can trace all the accessed files, it's possible to cache the evaluation for a given output. In lorri, I assume that the loop looks a little bit like this:

cache = record(evaluation)
select(cache.changed?) do # this is triggered by epoll watches
  cache = record(nix-build -vvvv ...)
end

The cache is a map from files to mtimes and is kept in-memory. The record(nix-build -vvvv ...) is a nix-build -vvvv where all the touched files are extracted. There is a bit of logic to remove the immutable store paths.
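The "touched files" extraction could be sketched like this (a hypothetical sketch: I'm assuming the verbose nix output contains lines such as `evaluating file '…'` and `copied source '…'`; lorri's real parser may look quite different):

```python
import os
import re

# Assumed log format: lines like
#   evaluating file '/home/user/project/shell.nix'
#   copied source '/nix/store/abc-src'
ACCESS_RE = re.compile(r"(?:evaluating file|copied source) '([^']+)'")

def extract_watched_files(nix_log: str) -> dict:
    """Map each accessed file outside /nix/store to its mtime."""
    cache = {}
    for match in ACCESS_RE.finditer(nix_log):
        path = match.group(1)
        # Store paths are immutable, so watching them is pointless.
        if path.startswith("/nix/store/"):
            continue
        if os.path.exists(path):
            cache[path] = os.stat(path).st_mtime
    return cache
```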
This logic can be changed to serialize the cache to disk. In nix-build-cached:
cache = load_cache_from_disk
if !cache.changed? # here we compare the mtimes between the cache and the actual files
# early exit, nothing has changed
exit
end
cache = record(nix-build -vvvv ...)
save_cache_to_disk(cache)
You can find an implementation of this logic here: https://github.com/zimbatm/nix-experiments/blob/master/bin/nix-build-cached
The cache is a map from files to mtimes and is kept in-memory.
Thanks for the pseudocode! I was being silly. lorri does not currently keep track of mtimes; the cache is just a set of paths that is fed to notify. But in order for this "cache" to be useful across direnv invocations, it would of course have to include mtimes. That's the bit I was missing.
Not sure if this is a direnv or lorri bug:

When using lorri in combination with direnv, my shell tends to be very spammy. On every new shell line, I see the following output:

This happens even when I don't change anything and just hit Return, which is quite annoying. When not using lorri, but just use nix in my .envrc, I don't get these messages, only if something really changed.