cillianderoiste opened 10 years ago
This would probably interact badly with the new `scopedImport` primop.
In what way would that be "more transparent" than an index (like the `command-not-found` database)?
To me an index seems better, as each channel "version" is immutable. Evaluating thousands of packages for each query will not scale well.
It's more transparent because it's the same thing as nixpkgs except it's not spread out over multiple files. I think this would also make installing by name faster, and would also take `~/.nixpkgs/config.nix` overrides into account, which an index wouldn't. The evaluation doesn't seem to be the bottleneck, otherwise it wouldn't be so fast on a machine with an SSD. On a spinning disk it can take over a minute to start installing something by name, but it's less than 5 seconds with nixpkgs on an SSD.
> On a spinning disk it can take over a minute to start

Is it faster with a hot cache?
As an option, it may be possible to implement import from zip, just like Python does. This way `scopedImport` is not broken, and files are closer together (and optionally compressed), which should make reading them faster on a spinning disk.
By the way, `nix-env -qaP` runs in less than 2 seconds on my i5 with an SSD, so I never realized it was slow.
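The Python precedent mentioned above can be sketched briefly: Python's import machinery treats a zip archive on `sys.path` like a directory (via the stdlib `zipimport` mechanism), so many small files can be shipped and read as one compressed file. The module name and contents below are invented for illustration:

```python
import os
import sys
import tempfile
import zipfile

# Build a zip archive containing a tiny module (names are illustrative).
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "bundle.zip")
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("greeting.py", "def hello():\n    return 'hello from zip'\n")

# Putting the archive on sys.path makes its modules importable directly,
# without unpacking anything to disk.
sys.path.insert(0, archive)
import greeting

print(greeting.hello())  # prints: hello from zip
```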
I already implemented zip support a few years ago. If you're interested I can dig up the patch.
It sounds cool to me, but since you've already tried it out and didn't pursue it ... was it not so useful?
I don't have the skills to do anything with such a patch myself, so, don't go digging up the patch on my account.
On an SSD the queries seem mostly CPU-limited, but on a slow HDD it could be annoying. (I moved my $HOME to an SSD this week to great overall effect, after fitting the SSD in my notebook in place of the DVD drive.)
The main problem with zip support, IIRC, was that the semantics of a Nix derivation attribute like:

```nix
src = ./foo.zip;
```

became unclear. I.e., should this copy the zip file to the store, or the unpacked contents of the zip file?
What about deciding that based on whether you write it as `./foo.zip/` or just `./foo.zip`? Ending with a slash clearly indicates Nix should look inside the archive (or one could even use `./foo.zip/.`).
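For illustration only, a minimal sketch of the proposed disambiguation rule (the function name and return strings are invented; this is not Nix's actual behavior):

```python
def classify(path: str) -> str:
    """Proposed rule: a trailing slash means 'look inside the archive'."""
    if path.endswith("/"):
        return "copy unpacked contents to the store"
    return "copy the archive itself to the store"

print(classify("./foo.zip"))   # prints: copy the archive itself to the store
print(classify("./foo.zip/"))  # prints: copy unpacked contents to the store
```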
Debugging will be hell with inline behavior.
I have no idea how the parser works, but couldn't all the .nix files be pre-parsed into a compressed binary blob? Then the semantics stay the same; just the loading from disk is optimized.
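The idea is analogous to how CPython caches compiled bytecode in `.pyc` files. A minimal sketch, using Python's `compile`/`marshal`/`zlib` as stand-ins for a hypothetical Nix parser and serializer (file names and contents are invented):

```python
import marshal
import zlib

sources = {
    "a.py": "x = 1 + 2\n",
    "b.py": "y = 40 + 2\n",
}

# Parse/compile each source once, then pack every result into a single
# compressed blob -- one file on disk instead of thousands.
packed = zlib.compress(marshal.dumps(
    {name: marshal.dumps(compile(src, name, "exec"))
     for name, src in sources.items()}
))

# Loading skips parsing entirely: one read, one decompress, then evaluate.
table = marshal.loads(zlib.decompress(packed))
ns = {}
exec(marshal.loads(table["b.py"]), ns)
print(ns["y"])  # prints: 42
```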
Time for `.nixc`? :imp:
We could pre-compile the stuff into the `*.drv` files, which is all that most people need. These could be just kept in some squashfs or similar archive with efficient per-file access.

> We could pre-compile the stuff into the `*.drv` files, which is all that most people need. These could be just kept in some squashfs or similar archive with efficient per-file access.
Isn't building a .drv for every package very, very slow? Remember there is `~/.nixpkgs/config.nix`, which, if changed, triggers a full rebuild.
I've just implemented a `nix-pack` program that combines most of Nixpkgs into a single file. Results are pretty good: on a slow HDD with a cold cache, `nix-env -qa` time went down from 36s to 6.3s. On a warm cache, the packed version is also slightly faster (on the order of 1.70s vs. 1.65s).
The downside is that it makes evaluating single packages slower. E.g. `nix-instantiate --dry-run -A firefox` goes up from 0.23s to 0.70s because it now needs to parse all of Nixpkgs. The zip file approach would be better in that respect.
That sounds very good for the common `nix-env -qa` and `nix-env -i` usage.
@edolstra, if `nix-pack` can combine .nix files while keeping them readable, can it extract a specified attribute and its dependencies? Something like `nix-instantiate --eval`, but producing something that can be read back by Nix (i.e. eval can't do it for recursive things). It would be useful for "freezing" reproducible environments (something like what `pip freeze` does for Python virtualenvs).
I marked this as stale due to inactivity.
I closed this issue due to inactivity.
Since evaluation on an SSD is way faster than on a spinning disk, it suggests that the overhead of running queries (`nix-env -qa`) is caused by the time it takes to traverse the filesystem rather than by the actual evaluation. If we could inline all imports in nixpkgs when creating a channel, then only one file would need to be opened, which would probably be way faster on both a spinning disk and an SSD, and it would also be more transparent than having some kind of cache for the sake of querying.
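The inlining idea from this description can be sketched as a toy: recursively splice referenced files into one source so only a single file has to be opened at query time. The `#include` syntax and file names here are invented for the example; real Nix imports are expressions, not plain textual includes:

```python
import re

def inline(path: str, files: dict) -> str:
    """Recursively replace include lines with the referenced file's text."""
    out = []
    for line in files[path].splitlines():
        m = re.match(r"#include (\S+)", line)
        out.append(inline(m.group(1), files) if m else line)
    return "\n".join(out)

# Two toy "files": the top-level one pulls in the other.
files = {
    "top": "start\n#include lib/a\nend",
    "lib/a": "middle",
}
print(inline("top", files))  # prints three lines: start / middle / end
```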