Closed: bergermx closed this issue 7 months ago
Hi @bergermx, thanks for your suggestion. A possible approach could be to place a script at a predefined location, e.g. bin/post-load-dump. If present, Geordi could run that script after loading a dump. Inside that script file, you'd implement everything that needs to be done.
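A minimal sketch of that check (illustration only, assuming Geordi would simply look for an executable file at that path; this is not existing Geordi code):
# Hypothetical sketch: after loading a dump, run a project-local hook if present.
hook = File.join(Dir.pwd, 'bin', 'post-load-dump')
if File.executable?(hook)
  puts "Running #{hook} ..."
  system(hook, exception: true) # fail loudly if the hook fails
end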
However, once you have that script in place, Geordi would provide only a small benefit. So, rather than adding this hook to Geordi, might you just prepare that script and inform developers to run it after loading a dump? You might even define a bin/load-dump that first loads a dump with Geordi and executes all extra steps right afterwards.
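A rough sketch of such a bin/load-dump; the staging target and the fix script name are placeholders for whatever your project needs:
#!/usr/bin/env ruby
# Hypothetical bin/load-dump: load a dump via Geordi, then run the extra steps.
require 'bundler/setup'

Bundler.with_unbundled_env do
  system('geordi', 'dump', '-l', 'staging', exception: true)
end

system('bin/fix-encrypted-columns', exception: true) # project-specific extra step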
wdyt?
@codener Thanks for the suggestions. IMHO I would prefer the possibility to override geordi dump with a binstub (as mentioned by @bergermx with the bin/setup routine), so that no dev could overlook the special dump routine.
This would require two changes:
- Geordi prefers bin/dump, if existing, over its own code
- Geordi ignores bin/dump when called with the argument --ignore-binstubs
Example bin/dump:
#!/usr/bin/env ruby
# Wrap the real geordi dump call and fix the loaded dump afterwards.
require 'bundler/setup'

def system!(*args)
  Bundler.with_unbundled_env do
    system(*args, exception: true)
  end
end

system!('geordi', 'dump', *ARGV, '--ignore-binstubs')
fix_the_dump # placeholder for the project-specific fixes
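For reference, the Geordi-side part of this proposal could be sketched like this (hypothetical, not actual Geordi code):
# Delegate to the project binstub unless the caller passed --ignore-binstubs.
binstub = File.expand_path('bin/dump')
if File.executable?(binstub) && !ARGV.include?('--ignore-binstubs')
  exec(binstub, *ARGV) # replaces the Geordi process with the project script
end
# ...otherwise continue with Geordi's own dump routine
That way, bin/dump can call geordi dump with --ignore-binstubs without recursing into itself.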
An example chain would look like geordi dump -l staging > bin/dump -l staging > geordi dump -l staging --ignore-binstubs > real geordi dump call > fix_the_dump call > finished
@codener wdyt?
In all these years, this is the first time we have heard the wish for dump-loading hooks. So we can say this is really an edge case, and I do not want to bloat Geordi for it. If fixing the dump is needed, developers will quickly get used to using bin/dump, won't they?
As for your suggestion: I don't like the back and forth between Geordi and the application. Your example chain shows the complexity, and it is really walking in circles. I see these options:
- If Geordi should prefer bin/dump over its own routine, that script should do all the work (just like bin/setup does). You can take the dump-loading code of Geordi and place it there, for example. For a known application, it should be quite short.
- Alternatively, we might introduce a post-dump hook as suggested before. I am reluctant about this, as it is such an edge case.
My best suggestion is still to have a project-local bin/dump that caters for the project's quirks. Should a developer run geordi dump -l instead of it, you could build the script such that it can be called afterwards as well. People will get used to it quickly.
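Roughly sketched, such a self-contained bin/dump could look like the following; the host, file and database names are placeholders and would differ per project:
#!/usr/bin/env ruby
# Hypothetical self-contained bin/dump that does all the work itself.
def run!(*args)
  system(*args, exception: true)
end

run! 'scp', 'deploy@staging.example.com:dumps/latest.dump', 'tmp/staging.dump'
run! 'pg_restore', '--clean', '--no-owner', '--dbname=myapp_development', 'tmp/staging.dump'
run! 'bin/fix-encrypted-columns' # project-specific cleanup after loading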
This week our team was not complete. I'll discuss with them next week and give you a team update.
An example chain would look like geordi dump -l staging > bin/dump -l staging > geordi dump -l staging --ignore-binstubs > real geordi dump call > fix_the_dump call > finished
I find that control flow very complicated.
Teams with a special need can already make their own bin/dump script like this, without needing changes from geordi:
geordi dump staging
# fix things in tmp/staging.dump
geordi dump --load=tmp/staging.dump
Thanks for the input. I'm fine with maintaining a bin/dump and "deprecating" the use of geordi in projects that use e.g. encrypted database columns.
After loading a database dump with geordi, we usually need to run multiple commands/scripts to make the loaded dump useful for development purposes. For example: a Ruby script that fixes encrypted database columns.
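As an illustration only (model, column and path names are made up), such a fix script could be as small as:
#!/usr/bin/env ruby
# Hypothetical fix: blank an encrypted column that local keys cannot decrypt.
require_relative '../config/environment'

User.in_batches do |batch|
  batch.update_all(ssn: nil) # writes directly to the DB, bypassing encryption
end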
I believe this might be a common workflow/issue, so maybe geordi could offer a better solution for developers. Instead of maintaining a list of tasks in the README that need to be completed after loading a database dump, geordi could have an option to run these project-specific tasks automatically.
It could be something like bin/setup for geordi setup, some sort of hook, or just another flag for the dump command that executes a script which includes all required tasks. What do you think?