Closed: xave closed this issue 2 months ago.
@xave, is there any interaction between your idea and the existing --[no-]keep-going flag?
I don't think so.
I get a Process exited with code: ExitFailure 1 on some broken code in either case. I would imagine that the --keep-going flag would continue without that behavior. Or, if that isn't expected behavior, this highlights the need for this feature request: I don't mind the build exiting suddenly; I just need "other stuff" to happen whether it fails or succeeds. In this scenario, that "other stuff" never executes on a build failure.
@xave, if --exec <do_something> does something if, and only if, the build succeeds, do you want --on-fail <do_something_else> that does something if, and only if, the build does not succeed?
Yes.
The following are some notes for myself, thinking about this:
The --exec commands are proc-ed at the end of Stack.Build.Execute.executePlan; presumably they are not reached if an exception is thrown beforehand. I suppose the --on-fail commands need to be proc-ed after the exception has been thrown and dealt with, because otherwise there is nothing in the output to process.
-- | Perform the actual plan
executePlan :: HasEnvConfig env
            => BuildOptsCLI
            -> BaseConfigOpts
            -> [LocalPackage]
            -> [DumpPackage] -- ^ global packages
            -> [DumpPackage] -- ^ snapshot packages
            -> [DumpPackage] -- ^ local packages
            -> InstalledMap
            -> Map PackageName Target
            -> Plan
            -> RIO env ()
executePlan boptsCli baseConfigOpts locals globalPackages snapshotPackages localPackages installedMap targets plan = do
  logDebug "Executing the build plan"
  bopts <- view buildOptsL
  withExecuteEnv bopts boptsCli baseConfigOpts locals globalPackages snapshotPackages localPackages mlargestPackageName
    (executePlan' installedMap targets plan)
  copyExecutables (planInstallExes plan)
  -- Deal with the `--exec` commands
  config <- view configL
  menv' <- liftIO $ configProcessContextSettings config EnvSettings
    { esIncludeLocals = True
    , esIncludeGhcPackagePath = True
    , esStackExe = True
    , esLocaleUtf8 = False
    , esKeepGhcRts = False
    }
  withProcessContext menv' $
    forM_ (boptsCLIExec boptsCli) $ \(cmd, args) ->
      proc cmd args runProcess_
 where
  mlargestPackageName =
    Set.lookupMax $
      Set.map (length . packageNameString) $
        Map.keysSet (planTasks plan) <> Map.keysSet (planFinals plan)
Pretty much all exceptions e thrown are caught and (effectively) thrown again at the end of Main.main (extracts):
eGlobalRun <- try $ commandLineHandler currentDir progName False
case eGlobalRun of
  ...
  Right (globalMonoid, run) -> do
    global <- globalOptsFromMonoid isTerminal globalMonoid
    ...
    withRunnerGlobal global $ run `catch` \e ->
      -- This special handler stops "stack: " from being printed before the
      -- exception
      case fromException e of
        Just ec -> exitWith ec
        Nothing -> do
          logError $ fromString $ displayException e
          exitFailure
I think that may mean that --on-fail needs to be a global option. If --on-fail is triggered by all exceptions, is there any benefit to having it 'within Stack'? If it is not triggered by all exceptions, which exceptions should trigger it?
As one data point, I would use this with the same command in either case:
stack ... --exec foo-bar --on-fail foo-bar
So, it would be more convenient for me to have
stack --exec foo-bar --exec-on-failure
or
stack --exec foo-bar --no-exec-success-only
But, of course, the way you've been discussing it is strictly more powerful; I just wanted to mention, in case it's easier to implement for some reason, that I'd be satisfied with a new switch that changes the behavior of a single --exec.
What part of the codebase would one inspect to implement this feature?
@pbrisbin, if what you are (or were) seeking is:
> stack build ... --exec foo-bar --on-fail foo-bar
then why not just chain the commands? (eg PowerShell)
> stack build ...; foo-bar
or (if foo-bar needs to run in the Stack environment):
> stack build ...; stack exec -- foo-bar
That is, is executing a sequence of commands not simply something for the shell?
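To spell out the shell operators involved, the following is a minimal sketch, using true and false as stand-ins for a build that succeeds or fails:

```shell
#!/bin/sh
# 'true' and 'false' stand in for a build that succeeds or fails.
true  && echo "runs only when the build succeeds"
false || echo "runs only when the build fails"
false ;  echo "a plain ';' runs the next command regardless"
```

So `stack build ... || foo-bar` would already give on-failure behavior for a one-shot build.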
@xave, the handling of exceptions currently occurs at the end of Stack.main. The current code has moved on from what I posted above. It now reads (extracts):
eGlobalRun <- try $ commandLineHandler currentDir progName False
case eGlobalRun of
  ...
  Right (globalMonoid, run) -> do
    global <- globalOptsFromMonoid isTerminal globalMonoid
    ...
    withRunnerGlobal global $ run `catches`
      [ Handler handleExitCode
      , Handler handlePrettyException
      , Handler handlePantryException
      , Handler handleSomeException
      ]
That is, is executing a sequence of commands not simply something for the shell?
@mpilgrem, your examples (stack ...; foo-bar) would run foo-bar once, when stack fails.
% stack build --file-watch --exec 'echo success'; echo 'failure'
building...
built
success
<time passes, file changed>
building...
built
success
<time passes, file changed>
building...
compile failure
failure
[1] %
What we are looking for is to continue to watch after running foo-bar in response to failure, exactly how --exec continues to watch after running something on success:
% stack build --file-watch --exec 'echo success' --exec-failure 'echo failure'
building...
built
success
<time passes, file changed>
building...
built
success
<time passes, file changed>
building...
compile failure
failure
<time passes, file changed>
building...
built
success
...
See the difference?
There are indeed various ways to accomplish this in the shell, but they're not that simple. Even --file-watch and --exec are technically "something for the shell" for the same reasons (or see entr(1)), but there is still value in having it.
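For reference, a shell-only approximation with entr(1) might look like the following sketch, where foo-bar stands for a hypothetical command to run after every build, pass or fail:

```shell
#!/bin/sh
# Re-run the build on every change to a watched .hs file; the quoted
# command itself handles both outcomes (foo-bar is hypothetical).
find . -name '*.hs' | entr -s 'stack build && echo success || foo-bar'
```

It works, but reproducing --exec's environment handling and the rest of --file-watch's behavior this way is exactly the "not that simple" part.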
@pbrisbin, I see: I had missed the --file-watch context from your original post. The relevant code for that is at the end of Stack.FileWatch.fileWatchConf:
eres <- tryAny $ inner setWatched
...
case eres of
  Left e ->
    case fromException e of
      Just ExitSuccess ->
        prettyInfo $ style Good $ fromString $ displayException e
      _ -> case fromException e :: Maybe PrettyException of
        Just pe -> prettyError $ pretty pe
        _ -> prettyInfo $ style Error $ fromString $ displayException e
  _ -> prettyInfo $
    style Good (flow "Success! Waiting for next file change.")
I am wondering about the following idea, and am interested in your views.
Stack has a 'hook' mechanism for GHC installation customisation: https://docs.haskellstack.org/en/stable/yaml_configuration/#ghc-installation-customisation. The customisation kicks in if you put a sh shell script in hooks/ghc-install.sh.
How about another 'hook' mechanism for --file-watch, to customise what it does when the build is complete (including if it ends in failure)? The customisation could kick in if you put a sh shell script in (say) hooks/file-watch.sh.
This hook idea seems interesting. How might that affect the way the watch behavior works now? In other words, if we were to move forward with this idea, would everyone have to put a script in that directory to get the watch functionality already provided by stack today?
Presumably, if that were the case, stack could come with a script that already retains current behavior and a person/team could modify that script to do their custom stuff. So it could end up being the same for users already used to how it works now (i.e. they don't have to care about this change unless they want to care).
I was thinking that if there were no 'hook' shell script, --file-watch would do what it always does, and if there were a script, --file-watch would use it.
I love that idea. We often do things like --exec bash -c "something; complicated && other things" and being able to put that in a full script would be lovely. I imagine it could work like git hooks, which are passed positional arguments when called, so that we could know some things within the script, such as whether the build failed.
I probably wouldn't call it hooks/file-watch.sh, for a number of reasons: any executable file should do, so .sh should not be required. I would also personally implement this as a lifecycle of the main subcommand, so hooks/post-build/*, and have anything executable there run after any stack build (with or without --file-watch), similarly to how --exec also runs with or without --file-watch. Finally, I wonder if hooks/ is too general. Git namespaces its hooks in .git/hooks, but you probably intentionally want these things version controlled (so .stack-work/hooks doesn't work). Maybe stack-hooks/post-build/*?
I probably wouldn't call it hooks/file-watch.sh for a number of reasons: any executable file should do, so .sh should not be required.
Agreed.
@pbrisbin, taking your comments in reverse order, how about:
the location of the file-watch-hook is configurable, as a non-project-specific configuration option? If the specified location is relative, it is assumed to be relative to the project root directory.
on Windows, if the specified hook file ends with .sh, it is assumed it should be run with sh; otherwise, it is assumed to be an executable. On Unix-like operating systems, the specified hook file needs to be marked as executable.
EDIT: In terms of passing information to the hook file, I had assumed the mechanism would be environment variables (as in the case of the GHC installation customisation). Windows limits you to 32,767 characters, but that should be enough to pass information about exceptions.
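To illustrate the environment-variable idea, a hook along these lines could work. This is only a sketch: HOOK_FW_RESULT is an assumed variable name, taken here to be empty on success and to carry the rendered exception on failure.

```shell
#!/bin/sh
# Sketch of a file-watch hook driven by an environment variable.
# HOOK_FW_RESULT is an assumed name: empty on success, the rendered
# exception on failure.
set -eu
if [ -z "${HOOK_FW_RESULT:-}" ]; then
  echo "Build succeeded."
else
  echo "Build failed: $HOOK_FW_RESULT"
fi
```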
In respect of something broader in ambition than stack build --file-watch, the 'problem' as I see it is that, currently, with --file-watch, all stack build exceptions are handled in one place and, without --file-watch, all stack exceptions (build-related or otherwise) are handled in another place. There is nothing that distinguishes exceptions thrown during building from other exceptions.
As you have identified, 'command chaining' does not work with stack build --file-watch because the Stack command never terminates. However, it seems to me that it is viable outside of that context, because the Stack command comes to an end and the next command can take over.
All sounds good to me; I don't have very strong opinions in this area generally.
I had assumed the mechanism would be environment variables
If that's how the only existing hook works, it makes sense to be consistent. I was just modeling after git, which (absent other points in the design space) seemed like a good idea.
I've tried to implement this: if not using GHCup to manage Stack, use stack upgrade --source-only --git-branch fix5925 to test it. See https://github.com/commercialhaskell/stack/pull/6597.
Thank you. I’ll give it a spin.
@xave, for your information, I am thinking of releasing Stack "3.1.1" soon, and whether or not it includes this may depend on your experience of it.
Thank you @mpilgrem! I have tested it out as follows:
Added this line to stack.yaml:
file-watch-hook: "./sample.sh"
My use-case is automatically populating the vim quickfix list with errors upon build failure. So I would run the command like:
stack build --file-watch 2>&1 | tee tmp.txt
And sample.sh would do some sort of processing of tmp.txt on build failure. At the moment, I just have (based off your sample script in the commit):
#!/bin/sh
set -eu
if [ -z "$HOOK_FW_RESULT" ]; then
  # Sanity check that I could change the default message
  echo "XXSuccess! Waiting for next file change."
else
  echo "Build failed with exception:"
  echo "$HOOK_FW_RESULT"
  # Some basic modifications of a file
  cat tmp.txt >quickfix.txt
  >tmp.txt
fi
This feature does everything I desire. Thank you.
Addendum: I have tried adding the same to the global stack.yaml, ~/.stack/global-project/stack.yaml, but got no result. In the case of working on a shared repo, it is possible that not everyone would want the same script. It would thus also be nice to be able to set this in the global stack.yaml instead of the project one, and have it apply to one machine only without having to burden a team or other collaborators. In my mind, if there is a project-level hook that the team wants, they would commit it to the project repo; otherwise, the individual would put it in the global stack.yaml.
@xave, thanks for the feedback.
On your addendum, the project-level configuration file (stack.yaml) in the global-project directory in the Stack root is not for 'global' configuration; it is for when you use Stack outside of a project. Non-project-specific options in that file are ignored.
The file (or files, on Unix-like operating systems) for 'global' non-project-specific options is config.yaml.
See: https://docs.haskellstack.org/en/stable/yaml_configuration/#yaml-configuration
I had also tried config.yaml (with the full filepath to the same executable as before). In both cases, I got the error:
Warning: File watch hook not executable. Falling back on default.
I'm using ghcup, so I compiled this stack from source. Likely some wires got crossed w.r.t. global config settings. I'll keep an eye out once the feature is released.
Introduction
There are times when one would like to use --exec even when the build fails. As an example, imagine running stack build ... --exec 'do-stuff-with-error-messages-on-stdout.sh input.txt' --file-watch. In this scenario, we want to process the stdout that was generated every time --file-watch causes a rebuild.
With the current --exec flag implementation, that shell script would only run on a successful build. Yet since it is doing something with the error messages, it would be useful to have it run even on failure, as that would ultimately help the programmer make the build succeed.
Proposal
The proposal is a new flag, --exec-always, that can execute even if the build fails, including while the --file-watch flag has been passed. This would go a long way toward helping people build their own tooling workflows around stack.