Open jantman opened 3 years ago
One of the central ideas in this library is to push implementation details to plugins, and leave Stages as a platform that various plugins can manipulate. For that reason, can I suggest a plugin alternative to your UseCase above? I'm calling it MyPlugin intentionally - I don't have a good name for it, but maybe after the behavior is sketched out, a name will be more clear.

@Library(['terraform-pipeline@v5.1']) _

Jenkinsfile.init(this)

MyPlugin.withBeforePlanScript('./bin/download_dependencies.sh')
        .withAfterInitScript('./bin/after_init.sh')
        .withBeforePlanScript('./bin/before_plan.sh')
        .withBeforeApplyScript('./bin/beforeApply.sh')
        .withAfterApplyScript('./bin/cleanup.sh', MyPlugin.RUN_ALWAYS)
        .init()
def validate = new TerraformValidateStage()
def deploy = new TerraformEnvironmentStage('dev')
validate.then(deploy).build()
This feels like it could possibly be implemented iteratively. Specifically, there are 2 UseCases.
Question on the ./bin/after_init.sh. As it is, terraform-pipeline runs init twice - once before plan, and again before apply. Would ./bin/after_init.sh be expected to run before both? I'd guess yes.
> Question on the ./bin/after_init.sh. As it is, terraform-pipeline runs init twice - once before plan, and again before apply. Would ./bin/after_init.sh be expected to run before both? I'd guess yes.

In terms of implementation in this project, I would assume yes. Anyone who's using our deprecated internal library and migrating to terraform-pipeline would need to check their code closely, as the old internal library only runs init once.
re: this being implemented in a plugin
I'll admit that I'm a little unclear on where the line between plugin and "core" either is, or should be. But... looking at what we currently have in TerraformEnvironmentStage:
decorations.apply(ALL) {
    stage(getStageNameFor(PLAN)) {
        decorations.apply(PLAN) {
            sh initCommand.toString()
            sh planCommand.toString()
        }
    }
    decorations.apply("Around-${CONFIRM}") {
        // The stage name needs to be editable
        stage(getStageNameFor(CONFIRM)) {
            decorations.apply(CONFIRM) {
                echo "Approved"
            }
        }
    }
    decorations.apply("Around-${APPLY}") {
        // The stage name needs to be editable
        stage(getStageNameFor(APPLY)) {
            decorations.apply(APPLY) {
                sh initCommand.toString()
                sh applyCommand.toString()
            }
        }
    }
}
Unless there's some aspect of the implementation, or of Groovy, that I'm missing, there's currently not a way to run post-init, pre-plan, or pre-apply commands here.
One thing that was missing from my description, and was an assumption that I made which may not be obvious, was the ordering of these hooks that I need. Specifically:
terraform init
terraform plan
terraform apply
There are at least a few different ways that I can think of implementing this, but they'll all require some changes to TerraformEnvironmentStage, as it doesn't currently have any way of running user-provided code between init and plan, or init and apply. A few of the options:

Option 1: Another level of decorate, that's actually around the individual commands, i.e.:
stage(getStageNameFor(PLAN)) {
    decorations.apply(PLAN) {
        decorations.apply(INIT_COMMAND) {
            sh initCommand.toString()
        }
        decorations.apply(PLAN_COMMAND) {
            sh planCommand.toString()
        }
    }
}
Option 2: Shell-script hooks checked around each command, i.e.:

stage(getStageNameFor(PLAN)) {
    decorations.apply(PLAN) {
        if (pre_init_hook != null) { sh pre_init_hook }
        sh initCommand.toString()
        if (post_init_hook != null) { sh post_init_hook }
        if (pre_plan_hook != null) { sh pre_plan_hook }
        sh planCommand.toString()
        if (post_plan_hook != null) { sh post_plan_hook }
    }
}
Option 3: Decorate the sh commands themselves. Instead of having commands e.g. sh planCommand.toString(), have those be classes that return a closure instead of a string, which can be decorated like any other. If that makes sense.

If we're concerned with making this into something that can be extended further, I think Option 1 is the most logical. We already have a convention of allowing decoration by plugins, using a constant to identify what's decorated. All that would be needed for hooks (the way I see/use them) to be implemented is for every sh command to allow decorations specific to its command.
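To illustrate Option 1 with a sketch (the INIT_COMMAND constant doesn't exist yet, and I'm assuming the existing plugin/decorate conventions here), a plugin could then target just the init command:

```groovy
// Hypothetical plugin built on an Option 1 per-command decoration.
// INIT_COMMAND is an assumed new constant; decorate() follows the existing
// plugin convention of wrapping an inner closure.
class AfterInitHookPlugin implements TerraformEnvironmentStagePlugin {
    public void apply(TerraformEnvironmentStage stage) {
        stage.decorate(TerraformEnvironmentStage.INIT_COMMAND) { innerClosure ->
            innerClosure()            // runs `terraform init` itself
            sh './bin/after_init.sh'  // then the user's post-init hook
        }
    }
}
```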
For what it's worth, the ability to do "cleanup" hooks, which would run inside a try/finally, is dependent on how this is implemented.
Additional note on the above: if we went with any of the decorator-based options, such as 1, it would be simple to add a ShellHookPlugin that could achieve the workflow of option 2 (shell script hooks) for users who only care about that, and don't want to have to define a bunch of closures that just call scripts.
> I'm a little unclear on where the line between plugin and "core" either is, or should be.
There's no hard/fast answer. If I could go back and rewrite things from scratch, my answer would be - "All plugins are not core. Things that are not plugins are core." But that's not how the project is currently structured.
> there's currently not a way to run post-init, pre-plan, or pre-apply commands here.
Correct. We'll have to make a change here.
decorations.apply(PLAN) {
    sh initCommand.toString()
    sh planCommand.toString()
}
If it's relevant, going back to my answer about "what is core", I would reframe this Issue as 2 issues - one issue to make "core" more flexible, and to augment init independently of plan/apply. Then a second issue of building a plugin that actually augments the behavior of init/plan/apply independently of one another.
... <ToBeContinued>
Actually, let me scratch that. This could be implemented without making any changes to TerraformEnvironmentStage, since they're just shell scripts. Though, I don't think this is a solution I'd suggest.
The TerraformCommands have a withSuffix method. You could add a ; ./bin/mycommand.sh as a suffix. You'd be combining different commands into a single shell. Perhaps not the greatest.
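For illustration, that workaround might look like this (a sketch only; how you'd obtain the command instance is glossed over here):

```groovy
// Sketch of the withSuffix workaround (not recommended): chain the hook
// script onto the init command so both run in the same shell
initCommand.withSuffix('; ./bin/after_init.sh')
```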
Continuing on the issue of adding decorations - I'm digging your first suggestion. It's the most consistent with what's already there, and offers a huge amount of flexibility.
decorations.apply(INIT_COMMAND) {
    sh initCommand.toString()
}
decorations.apply(PLAN_COMMAND) {
    sh planCommand.toString()
}
I'm less excited about option 2. If we're going to make a fundamental change to separate out init and plan/apply, I'd want to be able to do more than just shells.
Option 3 is interesting. I like the idea, but I think it'd be a fairly big change (I say that only because I'm making assumptions in my head about how it would be implemented.)
> The TerraformCommands have a withSuffix method. You could add a ; ./bin/mycommand.sh as a suffix. You'd be combining different commands into a single shell. Perhaps not the greatest.
Yeah, I saw that and had considered it but felt like it was rather ugly, and likely to have some unforeseen side-effects.
Agreed on ugly and unforeseen side-effects.
I like your suggestion on calling it ShellHookPlugin.
That name implies that it's in no way specific to TerraformEnvironmentStage. Can we think through that?
withBeforePlanScript
, withAfterApplyScript
make sense. Thoughts? MyPlugin.withScript(TerraformEnvironmentStage.BEFORE_PLAN, './bin/download_dependencies.sh')
...
.init()
Option 3 feels gross.
Ok, I'm going to begin work with a small-ish PR to add support for decorators around each of the specific terraform commands in TerraformEnvironmentStage (i.e. Option 1 in my original comment and what you mentioned in this comment).
Regarding your most recent comment about ShellHookPlugin... I'm in favor of calling it TerraformEnvironmentStageShellHookPlugin.
On the other hand, I'm not of the same feeling that Option 3 feels gross. In fact, I was thinking of that as a possible foundation of a future hook system, albeit with some additional plumbing that would need to exist in the core and does not yet, to eventually achieve a workflow like...
ShellHookPlugin.runBefore(TerraformEnvironmentStage.PLAN, './bin/download_dependencies.sh')
ShellHookPlugin.runAfter(BuildStage.SOMETHING, './bin/something.sh')
ShellHookPlugin.runBefore(DoesntExistYet.SOME_CONSTANT, './bin/futureproof.sh')
So, maybe this is too pie-in-the-sky, or too meta, or just idiomatically wrong because it's how I'd implement something in Python (my strongest language) and maybe just wrong for Groovy. But the way I'd usually approach this is with some sort of generic hook mechanism:
So... this is sort of an extension of the current decorator pattern... but instead of needing direct access to each class that you want to run a hook for, the hooks are managed centrally. This also makes it a bit easier/cleaner to have plugins that do things with hooks on your behalf, or even plugins that hook into other plugins. I honestly don't know if there's an accepted name for this pattern... maybe reverse pub/sub?
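A rough sketch of what that centrally-managed registry could look like, with every name hypothetical (the trivial snippet below puts the registration methods on Jenkinsfile; here they're shown as a standalone class for clarity):

```groovy
// Hypothetical central hook registry: hooks are registered by name, and
// stages invoke them around each hook point. All names are illustrative.
class HookRegistry {
    private static Map<String, List<Closure>> beforeHooks = [:].withDefault { [] }
    private static Map<String, List<Closure>> afterHooks  = [:].withDefault { [] }

    static void runBefore(String hookName, Closure hook) { beforeHooks[hookName] << hook }
    static void runAfter(String hookName, Closure hook)  { afterHooks[hookName] << hook }

    // Stages (or the core) would wrap each hook point with this
    static void around(String hookName, Closure body) {
        beforeHooks[hookName].each { it() }
        try {
            body()
        } finally {
            afterHooks[hookName].each { it() }
        }
    }
}
```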
Following through on this thought, the above example of ShellHookPlugin would be a tiny, almost trivial bit of code, that just converts the string command to a closure containing a sh step:
class ShellHookPlugin {
    public static void runBefore(String hook_name, String command) {
        Jenkinsfile.runBefore(hook_name, { -> sh command })
    }
    public static void runAfter(String hook_name, String command) {
        Jenkinsfile.runAfter(hook_name, { -> sh command })
    }
}
Let's elaborate on option 3.
I'm a little worried about using a String as the hook name. There's no way to prevent unexpected collisions. Any set of classes that happen to have the same hook point name will be affected in the same way. And the only way that you'd know that is by inspecting the values of the individual hook names.
I wonder if the key needs to be the class instead of the hook name. You'd have to be explicit, but at least that means the outcomes are predictable. Eg:
# This script is run for the TerraformEnvironmentStage, and no other stage.
ShellHookPlugin.runBefore(TerraformEnvironmentStage, './bin/download_dependencies.sh')
^-- the above doesn't say where the hook point should be. But that dove-tails well with your other UseCase of RUN_ALWAYS. There could be a set of optional params.
# This is generic. Your Stage doesn't have to specify hook points - the script simply runs before
ShellHookPlugin.runBefore(TerraformEnvironmentStage, './bin/download_dependencies.sh')
# This is more specific, keeping the interface open to whatever arbitrary implementations you want to add to your stage
ShellHookPlugin.runBefore(TerraformEnvironmentStage, './bin/download_dependencies.sh', [ hook: PLAN, when: ALWAYS ])
The classes open up the opportunity for a hierarchical implementation.
# Anything that implements the Stage interface will get this
ShellHookPlugin.runBefore(Stage, './bin/do_the_thing.sh')
# Create an arbitrary interface for cross-cutting concerns.
ShellHookPlugin.runBefore(ArbitraryInterface, './bin/things_that_use_interface.sh')
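Class-based keys would make that hierarchy straightforward to implement - a hook registered against a class or interface could apply to any stage class assignable to it (a sketch, not actual API):

```groovy
// Hypothetical matching rule for class-keyed hooks: a hook registered
// against Stage (or any interface) applies to every class assignable to it
static boolean hookApplies(Class registeredFor, Class stageClass) {
    return registeredFor.isAssignableFrom(stageClass)
}

// e.g. hookApplies(Stage, TerraformEnvironmentStage) would be true whenever
// TerraformEnvironmentStage implements Stage
```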
Also - thoughts on whether or not hooks should be additive? Eg:
# ignore the syntax, pay attention only to the outcome
ShellHookPlugin.runBefore(MyStage, './bin/hook1.sh')
ShellHookPlugin.runBefore(MyStage, './bin/hook2.sh')
Should that result in both ./bin/hook1.sh and ./bin/hook2.sh being run? Or just ./bin/hook2.sh?
Ok, very good point about the hook names being strings. So... if the main concern is uniqueness, is there an idiomatic way of achieving this? e.g., in Python, there's the id() built-in, which returns a unique (to a given running process) identifier for any object passed to it (which happens to be the object's memory address).
However... is there a benefit to the hook identifiers being objects, i.e. singleton subclasses of some common HookPoint class? Are there instances (aside from just documentation generation) where we might find value in being able to enumerate all of the possible hook points? I mean, class names should be unique at least within the package, right?
Re: having optional parameters like your middle example with hook and when... that would certainly work. I'm definitely onboard with something like when (ALWAYS|SUCCESS|FAILURE) being optional. I'm not as sure about the hook point being optional, but I suppose there's a strong and valid argument that "around the entire TerraformEnvironmentStage" is a sensible and obvious default.
> Also - thoughts on whether or not hooks should be additive? Eg:
>
> # ignore the syntax, pay attention only to the outcome
> ShellHookPlugin.runBefore(MyStage, './bin/hook1.sh')
> ShellHookPlugin.runBefore(MyStage, './bin/hook2.sh')
>
> Should that result in both ./bin/hook1.sh and ./bin/hook2.sh being run? Or just ./bin/hook2.sh?
Ehhhhh.... I'm not sure there's a terribly clear answer to that. Personally, I think they should not be additive, and that your example should just result in hook2 being run. If you want to run both scripts, there are various ways to do that.
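For example, using the same hypothetical API, one of those "various ways" is simply chaining both scripts into a single hook:

```groovy
// With non-additive hooks, both scripts can still run by chaining them
// inside one hook command
ShellHookPlugin.runBefore(MyStage, './bin/hook1.sh && ./bin/hook2.sh')
```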
So there's 3 steps here:
> if the main concern is uniqueness...
For step 2, it's less about being unique, and more about correctly identifying what should or should not have the hook applied. It doesn't have to be unique in order to do that. Example:
ShellHookPlugin.runBefore(TerraformEnvironmentStage, './bin/beforeDeploy.sh')
Let's say User X extends TerraformEnvironmentStage. It is an instance of the TerraformEnvironmentStage, but is not identical to TerraformEnvironmentStage. Should the hook apply?
> is there a benefit to the hook identifiers being objects
To me, yes there is, but only because I have a particular implementation in mind. Nearly all the plugins in this library have an init() method, and that's the entry point to registering a plugin with a Stage. Part of this discussion is future-proofing, to allow hooks to work with any Stage. Having the hook identifier as an object offers an interesting opportunity for future-proofing.
ShellHookPlugin.runBefore(SomeFutureClass, './bin/before.sh')
.init()
ShellHookPlugin now knows of the existence of a new Stage - by virtue of the fact that it was passed in. Which means .init() no longer needs to have any Stages hard-coded in it. I can construct the list of Stages that need to be modified by ShellHookPlugin, based on the Stages that were passed explicitly by the user.
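A sketch of what that might look like (hypothetical throughout; the actual registration mechanism inside init() is out of scope here):

```groovy
// Hypothetical: each runBefore() call records the Stage class it was given,
// so init() can iterate over exactly the stages the user referenced,
// with no hard-coded Stage list
class ShellHookPlugin {
    private static Map<Class, List<String>> beforeScripts = [:].withDefault { [] }

    static def runBefore(Class stageClass, String command) {
        beforeScripts[stageClass] << command
        return ShellHookPlugin  // allow chaining .init(), as in the example above
    }

    static void init() {
        beforeScripts.each { stageClass, scripts ->
            // register a decoration with each stageClass seen above
        }
    }
}
```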
I think I was far too vague when I talked about the hook identifiers being objects...
What I was talking about was something more like, on the stage side:
class TerraformEnvironmentStage implements Stage, DecoratableStage {
    ...
    public static final HookPoint PLAN = new HookPoint()
    ...
    // in the closure
    decorations.apply(PLAN)
    ...
}
And then when it's used:
ShellHookPlugin.runBefore(TerraformEnvironmentStage.PLAN, './bin/before.sh')
i.e. not just having the hook identifiers be an object, but having them be specific instances of some type, which could then (I think? Groovy can do this, right?) be introspected if needed.
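Yes, Groovy can do that - the HookPoint constants on a stage class could be enumerated with standard Java reflection (a sketch, assuming the HookPoint type from the snippet above):

```groovy
import java.lang.reflect.Modifier

// Hypothetical: list every HookPoint declared as a static field on a stage
// class, e.g. for documentation generation or validating hook registrations
def hookPointNames = TerraformEnvironmentStage.declaredFields
    .findAll { Modifier.isStatic(it.modifiers) && it.type == HookPoint }
    .collect { it.name }
// would yield e.g. ['PLAN', ...] for the class sketched above
```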
> Having the hook identifier as an object offers an interesting opportunity for future-proofing.
>
> ShellHookPlugin.runBefore(SomeFutureClass, './bin/before.sh')
> .init()
I'm a bit torn on this. Once again, maybe this is because of my experience with other common hook patterns... but I'm not sure I like the idea of conflating Classes / Stages with hook points. A hook point, in my mind, is a very specific point in execution where some code is run. Just because in TerraformEnvironmentStage the most sensible default is to wrap all of the body of what's actually being done (init/plan/apply), I'm not sure that location for hooks will necessarily be the most logical in all cases going forward. For example, I can see some use cases relating to build/testing where the most logical default hook location is after dependencies are installed but before a build/test runs. Essentially, I think that by definition, the exact point in execution where a hook is run is fundamentally important, and I'm not a fan of the idea that classes have "default" hook points, which wrap everything the class does.
Regardless of this larger discussion, I do believe that I'm going to start work on an initial TerraformEnvironmentStageShellHookPlugin (ugh, what a name)...
Agreed on gross name - TerraformEnvironmentStageShellHookPlugin. I'm less concerned about name on this point, and more concerned about intent. If the intent is to be generic and eventually cover all Stages, I'm cool with just calling it ShellHookPlugin, even though for now it only affects TerraformEnvironmentStage.
If the intent is to never extend it beyond TerraformEnvironmentStage - then, I'm actually more fine with the gross name, because at least it makes its intent clear.
Pinning this to v6.0. If there are any fundamental/breaking changes that need to be made to make this feature easier to work with, v6.0 will be the opportunity to do so.
Some of my team's terraform makes extensive use of hook scripts/commands, specifically:
terraform init
terraform apply
terraform plan
Right now we've got #255 "Refactor hooks in TerraformEnvironmentStage", #250 "Provide a hook for arbitrary scripts after 'apply'", and #292 "Provide a hook for arbitrary scripts after 'terraform validate'".
It seems to me that there's a need for a more generic hook script mechanism here. #250 gives an example implementation that includes usage like PostApplyPlugin.run('./bin/run_migrations.sh'), but it seems to me that this will end up violating DRY.

Without much knowledge of the internals or how difficult this would be to implement, I was thinking about something much more like:
In essence, my thought was about - at least for TerraformEnvironmentStage - a more generic way to set hooks, based on methods for setting before/after hooks given a constant for the command to run them before/after. The one piece here that I'm a bit more confused by, in terms of implementation, is how to get a run_after hook to execute even on failure; I guess we'd just need to wrap the commands in a try/finally, that may or may not have a hook in the finally...
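That try/finally idea might look roughly like this inside TerraformEnvironmentStage (a sketch only; afterApplyHook is a hypothetical field holding the user's script):

```groovy
// Sketch: wrap the real commands so a run_after hook fires even when
// apply fails; afterApplyHook is hypothetical, not an existing field
decorations.apply(APPLY) {
    try {
        sh initCommand.toString()
        sh applyCommand.toString()
    } finally {
        // runs whether apply succeeded or failed
        if (afterApplyHook != null) {
            sh afterApplyHook
        }
    }
}
```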