Checking in kilolines of generated code can get quite messy, but this can be mitigated by establishing and documenting some sensible rules.
A couple of ideas to get started:
- Commits that include generated artifacts should never include anything written by hand.
- "Regenerate code/documentation/models" tasks should be done entirely by self-contained scripts that are checked into the repo. No arbitrary command-line fiddling.
- Those scripts should perform the full pipeline:
  - clean the repo,
  - regenerate the relevant artifacts and stage them,
  - ask for confirmation and commit them,
  - push and create a pull request (should be easy via `hub`).
- The commit and pull request messages should be generated automatically and be useful:
  - specify the name of the script that produced them,
  - include the hashes and versions of the upstream inputs,
  - BONUS: include a summary of changes (e.g. a list of affected namespaces/methods/functions).
- We should specify some fictional commit author, while the person responsible for the commit is recorded as the committer.
  - BONUS: give the fictional author a name and an identity, create a GitHub user for them, and automate them via CI. Then they could be bold enough to specify themselves as the committer.
Since we are checking the JSON models into the repo, we don't need to include their versions/hashes: their state is the state of the repo at commit time.
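A minimal sketch of such a self-contained regeneration script, assuming a `generated/` output directory and a `./tools/generate.sh` generator entry point (both names are made up for illustration):

```shell
#!/usr/bin/env bash
# regenerate.sh -- hypothetical end-to-end regeneration pipeline
set -euo pipefail

regenerate_and_propose() {
  # 1. Refuse to run on a dirty tree, then clean out stale generated files
  git diff --quiet || { echo "working tree is dirty; aborting" >&2; return 1; }
  git clean -fdx generated/

  # 2. Regenerate the relevant artifacts and stage them
  ./tools/generate.sh
  git add generated/

  # 3. Ask for confirmation, then commit
  read -rp "Commit regenerated files? [y/N] " answer
  [ "$answer" = y ] || return 0
  git commit -m "regenerate generated sources (via regenerate.sh)"

  # 4. Push and open a pull request via hub
  git push -u origin HEAD
  hub pull-request --no-edit
}
```

Keeping the whole pipeline in one function makes the script trivially re-runnable: it either completes the full clean → regenerate → commit → PR cycle or bails out early without touching history.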
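Generating the message itself can be a small helper; the trailer names (`produced-by`, `upstream-version`, `upstream-hash`) are made up for illustration:

```shell
# build_commit_message SCRIPT VERSION HASH
# Assembles a commit/PR message naming the producing script and the
# upstream version and hash the artifacts were generated from.
build_commit_message() {
  printf 'regenerate generated sources\n\n'
  printf 'produced-by: %s\n' "$1"
  printf 'upstream-version: %s\n' "$2"
  printf 'upstream-hash: %s\n' "$3"
}

build_commit_message regenerate.sh v2.4.1 a1b2c3d
```

The resulting message can be piped straight into `git commit -F -`, so the commit text is never typed by hand.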
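The fictional-author idea maps directly onto git's author/committer split; here is a demonstration in a throwaway repo (all names and addresses are hypothetical):

```shell
# Create a throwaway repo so the commands below are safe to run anywhere
repo=$(mktemp -d) && cd "$repo" && git init -q

echo '{}' > model.json && git add model.json

# The fictional bot is the author; the responsible human is the committer
GIT_COMMITTER_NAME="Jane Doe" GIT_COMMITTER_EMAIL="jane@example.invalid" \
git commit -q --author="Regen Bot <regen-bot@example.invalid>" \
  -m "regenerate models"

git log -1 --format='author: %an, committer: %cn'
# → author: Regen Bot, committer: Jane Doe
```

`--author` sets who "wrote" the change, while the `GIT_COMMITTER_*` environment variables record who actually made the commit, so responsibility stays visible in the history.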