Open jamshidh opened 8 years ago
Yeah, this isn't so great. Asking for confirmation when reinstalling everything could be reasonable.
One issue is that stack is expected to be scriptable. This makes it problematic to add user input to commands that haven't had it in the past (and `stack build` is one of the main things used by scripts).
https://github.com/commercialhaskell/stack/issues/1265 will make this much better. Assuming you have built both configurations before, only local packages would need to be rebuilt. We've also discussed having multiple sets of build caches - https://github.com/commercialhaskell/stack/issues/1634
I think if you realize that you typed something wrong, you can press Ctrl-C and rebuild with the correct options. One thing that can be done is to help the user realize that via a warning; it need not be interactive, since Ctrl-C can be used for cancelling.
We do print a per-file reason why a particular file is being rebuilt. We could perhaps extend this to the global level as well, i.e. print a reason at the start when a global option changes that affects all files. For example, if -Wall is added to or removed from the GHC options, the whole project will be recompiled. We could warn the user upfront about it instead of relying on the per-file warnings, which may not be obvious. I'm not sure how easy or difficult it will be to implement this cleanly.
I am not completely sure of the difference between a local file and a global one, but our build puts around 80 packages in the local subdirs, and stack insists on rebuilding all of them when I change the command-line options. Ctrl-C doesn't help: if I type the command and hit enter, even stopping less than a second later isn't good enough; everything is going to rebuild on the next run, even if I retype the correct line. This is true even after a "-profile" add/remove.
If things changed so that I could do the ctrl-c and rebuild, that would be good enough for me.... It doesn't currently work this way.
Ctrl-C isn't sufficient because the old packages get unregistered. I have an idea! I bet we're unregistering packages in some arbitrary order. We could instead unregister them in a topological order. This way, we first unregister things with the fewest dependencies, so the earlier you Ctrl-C while it's deregistering, the less needs to be rebuilt.
This somewhat depends on unregistering packages taking a while... Maybe if we detect that we're unregistering everything, give a warning like

WARNING: About to unregister all local packages, as everything needs to be recompiled. Use Ctrl-C now if this isn't what you want!

and toss in a `threadDelay (1000 * 1000)`. Though I don't really like having a delay anywhere... It would allow cancellation while keeping stack scriptable.
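As a user-side sketch, the warning-plus-delay idea amounts to something like the following; the function name is made up here, and this is only an approximation of what stack itself might do, not its actual code:

```shell
# Sketch: print the proposed warning on stderr, then sleep one second
# (the shell analogue of threadDelay (1000 * 1000)) so the user still
# has a window in which Ctrl-C cancels before anything is unregistered.
warn_before_unregister_all() {
  echo "WARNING: About to unregister all local packages, as everything needs to be recompiled. Use Ctrl-C now if this isn't what you want!" >&2
  sleep 1
}

warn_before_unregister_all
echo "proceeding to unregister and rebuild"
```

Because the pause is a fixed delay rather than a prompt, a script that pipes `stack build` keeps working unchanged; it just runs one second longer.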
I thought about this suggestion a bit, and am not sure if it would work....
The problem is, I believe an unregister is probably very fast, and the majority of the time is in the rebuild/register. And packages have to be installed in dependency order, so everything is going to be unregistered immediately, then rebuilt in order.
For instance, take the simple case of four packages in a dependency chain....
A->B->C->D
You rebuild, and stack does the following: by the time I hit Ctrl-C, it has already unregistered A, B, C, and D, and is rebuilding A.
Maybe if B, C, and D were flagged to unregister and could be untagged, this would work?
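For what it's worth, the ordering itself is cheap to compute; which direction actually helps is what's being debated above. Here is a sketch with coreutils `tsort` on the A->B->C->D chain, listing each edge as "dependent dependency" so that dependents sort first:

```shell
# Sketch: a topological unregister order for the chain A->B->C->D
# (D depends on C, C on B, B on A). Each input line "X Y" means X must
# come before Y, so the output removes dependents before dependencies.
printf '%s\n' 'D C' 'C B' 'B A' | tsort
```

For a single chain like this the order is unique (D, C, B, A), so an early Ctrl-C under this scheme would leave the deeper dependencies still registered.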
@mgsloan a typical solution to allow scriptability in other programs is to check whether they are interactive by testing whether they are attached to a tty.
```
% echo 1 | tty
not a tty
% tty
/dev/pts/9
```
See e.g. http://unix.stackexchange.com/questions/26676/how-to-check-if-a-shell-is-login-interactive-batch
Then, a --batch flag could allow you to force the non-interactive behaviour.
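A minimal sketch of that check, with the proposed --batch override (--batch is a suggestion in this thread, not an existing stack flag):

```shell
# Sketch of tty-based interactivity detection plus a --batch override.
is_interactive() {
  # Interactive only when --batch was not passed and stdin is a terminal.
  [ "${1:-}" != "--batch" ] && [ -t 0 ]
}

if is_interactive "$@"; then
  echo "interactive: could prompt before unregistering"
else
  echo "non-interactive (or --batch): proceed without prompting"
fi
```

Anything run from a pipe, cron job, or CI runner fails the `[ -t 0 ]` test automatically, so scripts would never hang on a prompt even without --batch.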
@nh2 Yep, stack already has `getTerminal :: HasTerminal r => r -> Bool` for this purpose
I also run into this every now and then, and stack makes me wait not just 10-20 minutes, but more like 1-5 hours to recompile 70-150 packages.
If scriptability concerns are the only thing holding this back, then we could make stack behave like many other CLI tools: have --safe and --force flags. --safe could be the implicit default, meaning stack would halt with a descriptive error if it were about to perform destructive, non-undoable changes; then, if the user understands the implications, they can re-run the same command with --force appended. And CI scripts, where time is not an object, could specify --force to begin with.
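The proposed gate might look roughly like this; the function name and the way the unregister count reaches it are made up for illustration, and nothing here is stack's real interface:

```shell
# Sketch of the proposed --safe (default) / --force behaviour.
# $1 = mode ("--safe" or "--force"); $2 = number of local packages that
# this build would unregister.
guard_destructive() {
  mode="$1"
  would_unregister="$2"
  if [ "$mode" = "--force" ] || [ "$would_unregister" -eq 0 ]; then
    return 0   # nothing destructive, or the user explicitly forced it
  fi
  echo "error: this build would unregister $would_unregister local packages;" >&2
  echo "re-run with --force to proceed." >&2
  return 1
}

guard_destructive --safe 3 || echo "halted; nothing was unregistered"
```

The key property is that the default path refuses loudly instead of silently unregistering, while a single extra flag restores today's behaviour for CI.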
Related StackOverflow post: https://stackoverflow.com/questions/45770444/what-can-cause-stack-build-to-keep-unregistering-local-dependencies-every-time
What do you think?
I'm continuing to be inconvenienced by this, but I've found an alleviating workaround.
I've started to store de-duplicated and compressed backups of my .stack-work folders like this:

```
$ function foo () { local ourpath="$(readlink -f "$1")"; tar cf - "$1" | zbackup --non-encrypted backup ~/.zbackups/backups/bkp-$(date -u +%FT%TZ | sed 's/:/-/g')-$(echo "$ourpath" | sed 's%/%_%ig'); }; foo "./.stack-work"
```
And restore them upon need like this:

```
$ zbackup --non-encrypted restore ~/.zbackups/backups/bkp-2018-03-24T15-19-59Z-_home_wizek_path_to_.stack-work | tar xf - -C .
```
Using https://github.com/zbackup/zbackup
I'm pleasantly surprised by the time and space efficiency of this approach: even for a single .stack-work folder that weighs 1-3GB, the initial ~/.zbackups/ folder only grows to 100-200MB. And each subsequent snapshot only seems to increase this by 0-50MB, depending on the amount of new unique data, I presume.
Disclaimer: this is not yet perfect, because it still depends on some foresight or manual diligence on my part: e.g. doing backups before builds where I think unregistering might occur, and/or doing backups after successful builds that changed the dependencies. Doing both is not an issue at all, though, because deduplication ensures a 0MB increase in space usage. At least in some cases it can prevent the cursing that used to happen when I started to see "unregistering, unregistering, unregistering".
Since I've been bitten by this again, here is another workaround:
```
stack build --dry-run 2>&1 | grep "No packages would be unregistered." && stack build
```
Anyone else still troubled by this issue sometimes?
Any hope for stack becoming more careful before deciding to issue destructive/dangerous commands by itself?
If I accidentally run "stack install" with different options (add/remove -profile, --extra-lib-dirs, etc), everything is reset and rebuilt.... For our project, this can take a long time, 10-20 minutes. I understand why you have to do this, and agree that it is the correct decision.
However, a warning that would let us cancel would be great.... Especially since I often just mistype or forget a particular option (like -profile), and what should have been a 30 second compile suddenly becomes a 20 minute ordeal. I do this a lot....
Something like this would be great: