rhuss opened this issue 9 years ago
BTW I've updated the code to only warn about not being able to find Jenkins / Gogs and the like if JENKINS_HOME is defined by default (you can override the environment variable used for detection).
It's a tricky one really; ideally we'd annotate an RC with the git commit / URL / build that created it at "fabric8:json" time.
Maybe we just need to define an env var or something so that we know it's inside an S2I image, so we can annotate the json with the S2I image URL instead of jenkins?
BTW why is there no .git repository found, I wonder? At a bare minimum we should annotate the git commit ID on all builds.
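A rough sketch of the env-var idea (the variable name `S2I_BUILD_URL` is a made-up assumption, not an agreed contract; `JENKINS_HOME` is the variable mentioned above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of picking the annotation source from the
// environment. S2I_BUILD_URL is an assumed variable name; JENKINS_HOME
// is the conventional variable set inside a Jenkins build.
public class BuildSourceDetector {

    static String detectSource(Map<String, String> env) {
        if (env.containsKey("S2I_BUILD_URL")) {
            return "s2i";          // annotate with the S2I image URL
        } else if (env.containsKey("JENKINS_HOME")) {
            return "jenkins";      // annotate with the Jenkins build URL
        }
        return "local";            // no CI context: only git metadata
    }

    public static void main(String[] args) {
        Map<String, String> env = new LinkedHashMap<>();
        env.put("JENKINS_HOME", "/var/jenkins_home");
        System.out.println(detectSource(env)); // prints "jenkins"
    }
}
```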
But then maybe the default binding of 'fabric8:json' to the package phase is not so appropriate because of this 'calling out'? S2I builds don't need this. Another use case is building the docker image and running it locally.
s2i does some heavy copying of source files before starting the build, wiping out the local git repo. This will probably break other builds which e.g. pick up some version numbers from a local git repo, too.
If I understand right, you need the OpenShift API in order to query some metadata (sorry, still not into the details :). But there will be many situations where this is not guaranteed. For 'fabric8:apply', ok, there is supposed to be an API server alive. But for 'fabric8:json' (which creates a JSON file) it should also be possible without one. Maybe we can make it switchable via configuration whether the DevOps metadata should be attached or not?
We could certainly have environment variables / CLI arguments to opt out of generating the metadata.
However we should always strive to add as much metadata to the JSON as we can (e.g. git commit/URL metadata for sure; for a S2I build we should link to the build number/log too).
BTW when running inside Kubernetes, we'll have environment variables to be able to find things like gogs / jenkins; so we won't need to use the Kubernetes REST API directly; though for generating external links (e.g. public links to git / jenkins) thats kinda hard without using the REST APIs - maybe we'd need to pass those into the S2I builder as environment variables?
I agree in attaching as much metadata as possible and available. My concern is about coupling: if you have a general purpose plugin like the fabric8-maven-plugin which refers to a specific feature which might not be available (like this JENKINS_GOGS_USER), it couples the plugin to this other app/feature, which makes maintenance harder (when you change devops you have to remember to update fabric8-maven-plugin, too).
If it were the other way round, so that the fabric8-maven-plugin extracts information from its direct context (i.e. the pom.xml) and someone else picks that up, it would stay independent (i.e. it could run for sure even when the devops setup changes).
For me it also feels a bit wrong to put current runtime information obtained from the OpenShift API into a build config which can end up in a Maven repo and be reused somewhere else where this runtime is different. My feeling is that this information should be injected at runtime, when or after the app is deployed.
How, at runtime of a docker container, can you know the git commit, the jenkins build URL and the URL to browse the source code, without putting that metadata somewhere?
It's common practice in CI/CD to put the git commit ID, branch, git hosting URL, issues fixed, CI build URLs and so forth inside jar manifests to aid traceability. I'm not sure why OpenShift templates should be any different? They are a natural place to put metadata about source code, commits, builds and so forth, to aid traceability from runtimes back to the various source / issue tracker / build systems. I'm all for doing the same with docker images and jar manifests too BTW.
The generic, non-release-specific stuff tends to be in the pom.xml anyway (the top-level URL for git, website, mailing list etc.); it just doesn't host the release-specific stuff.
There's maven plugin coupling if we did the same thing for jar manifests + openshift templates + docker image labels, as each of those plugins needs to know where to pull this metadata from. Ideally we'd just define it all as system properties or env vars or a canonical build file or something, generated by a separate plugin (is that maybe what you were getting at?), so that we don't have to spread this logic between lots of different maven plugins (which might also make it easier to enable/disable the CI/CD metadata generation via profiles or whatever).
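To sketch that decoupling idea: one plugin generates the release-specific metadata once, and the jar-manifest, docker-label and template plugins all read the same canonical keys instead of re-deriving them. The `fabric8.build.` property prefix and key names below are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical: collect release-specific metadata from system
// properties using one agreed prefix, so every consuming plugin
// reads the same canonical source.
public class BuildMetadata {

    // The "fabric8.build." prefix and the key names are assumptions.
    static Map<String, String> fromProperties(Properties props) {
        Map<String, String> metadata = new LinkedHashMap<>();
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith("fabric8.build.")) {
                metadata.put(name.substring("fabric8.build.".length()),
                             props.getProperty(name));
            }
        }
        return metadata;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("fabric8.build.git-commit", "0abc123");
        props.setProperty("fabric8.build.git-url", "https://gogs.example/repo.git");
        props.setProperty("unrelated.key", "ignored");
        System.out.println(fromProperties(props));
    }
}
```

With something like this, enabling or disabling metadata generation becomes a question of whether the generating plugin runs at all, without touching the consumers.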
What I'd like is for each jar (https://github.com/fabric8io/fabric8/issues/5058), docker image (https://github.com/fabric8io/fabric8/issues/4575) and kubernetes template to have all of these annotations in them: http://fabric8.io/guide/annotations.html#continous-delivery-annotations
I'd like to add some more too really; e.g. an annotation to point at a list of all issues resolved in this release (versus the previous one) - https://github.com/fabric8io/fabric8/issues/4888 - along with annotations to point at the browseable javadoc and maven site: https://github.com/fabric8io/fabric8/issues/4924
We already use the concept of @KubernetesModelProcessor to define decorators for the generated JSON and cover cases where the maven plugin is not enough.
This allows decorating the JSON without polluting/coupling to the plugin.
We could use a similar concept here.
We could create a decorator which would be responsible for enriching any JSON with CI/CD related information. Then we should find a way to invoke the decorator if it is present on the classpath, or something like this.
It could possibly work with java.util.ServiceLoader, or by scanning for annotated classes, or whatever we choose. I think this way we have a win-win.
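A minimal sketch of the ServiceLoader variant (the `JsonDecorator` interface name is invented here purely to illustrate the discovery mechanism; it is not part of the existing @KubernetesModelProcessor machinery):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

// Hypothetical SPI for enriching the generated JSON's annotations with
// CI/CD information without coupling the plugin to any specific tool.
interface JsonDecorator {
    void decorate(Map<String, String> annotations);
}

public class DecoratorRunner {

    // Implementations are discovered via the standard ServiceLoader
    // mechanism: a provider jar lists its implementation class in
    // META-INF/services/JsonDecorator. If no provider is on the
    // classpath, the annotations pass through unchanged.
    static Map<String, String> applyDecorators(Map<String, String> annotations) {
        for (JsonDecorator decorator : ServiceLoader.load(JsonDecorator.class)) {
            decorator.decorate(annotations);
        }
        return annotations;
    }

    public static void main(String[] args) {
        Map<String, String> annotations = new HashMap<>();
        annotations.put("fabric8.io/git-commit", "0abc123");
        // With no providers registered this just returns the map as-is.
        System.out.println(applyDecorators(annotations));
    }
}
```

The point is the inversion: the plugin only knows the SPI, and a separate DevOps-aware jar on the build classpath supplies the tool-specific enrichment.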
See also #5090
While doing an s2i test, for a build which has `fabric8:json` attached to its `package` phase, I see the following error in the log:

This is of course correct, since the environment is not set up for direct OpenShift API access. I wonder whether `fabric8:json` is supposed to do this (accessing the OpenShift API), or isn't it all about generating a JSON file?

Btw, the warnings here are still very disturbing, too. There should be another way to connect DevOps to this plugin so that the plugin stays agnostic of DevOps. Otherwise coupling will kill us sooner or later.