clearcontainers / jenkins

Used to store the Jenkins-based CI configuration files - for backup, re-creation, history, etc.
Apache License 2.0

Add pipeline Jenkins plugin #7

Open jodh-intel opened 7 years ago

jodh-intel commented 7 years ago

This Jenkins plugin will allow us to get a breakdown of where the build time is being spent.

It looks like we can create individual stages for arbitrary commands, meaning we can make it as granular as we want ;)
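For a sense of what that could look like, here is a minimal, hypothetical declarative Jenkinsfile sketch - the repo URL and shell commands are placeholders, not our actual job steps:

// Hypothetical sketch: wrap each existing build step in its own stage so the
// stage view shows where the build time goes and which step failed.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/clearcontainers/tests.git' // placeholder repo
            }
        }
        stage('Setup') {
            steps {
                sh './setup.sh' // placeholder for whatever setup we run today
            }
        }
        stage('Unit tests') {
            steps {
                sh 'make check' // arbitrary command - stages can be as granular as we like
            }
        }
    }
}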

grahamwhaley commented 7 years ago

Just to add some notes here whilst fresh in my mind... I am using a Pipeline stage model to evaluate running metrics tests under Jenkins. The pipeline model is nice, but I am having 'niggles' when trying to integrate it with the GitHub Pull Request plugin (https://plugins.jenkins.io/ghprb) we use to monitor our repos. It sort of works, but in the Pipeline 'view' I am having great difficulty setting up the GitHub access tokens: they appear in the Jenkins Credentials area, but they are not offered as an option in the pipeline config dropdown, and I'm not convinced they are being used for my GitHub accesses. I guess what I'm saying is that I have a feeling that, to use the Pipeline plugin, we may have to switch our setup scripts over to the pipeline model, and as I'm having niggles with the Pipeline and PR plugins right now, maybe we should hold off on this for the moment.

Another note - I also came across the Jenkins 'Job DSL' setup method, which felt like it could be very appropriate for our complex set of similar repos (https://wiki.jenkins.io/display/JENKINS/Job+DSL+Plugin); a rough sketch of the idea is below.
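To make that concrete, a hypothetical Job DSL seed-script sketch - the repo names and the shell command are placeholders, and the ghprb trigger configuration is omitted:

// Hypothetical Job DSL seed script: one template stamped out per repo.
['runtime', 'proxy', 'shim'].each { repo ->
    job("cc-${repo}-ci") {
        scm {
            git("https://github.com/clearcontainers/${repo}.git") // placeholder repo URL
        }
        // triggers { ... } // the ghprb trigger would go here; omitted in this sketch
        steps {
            shell('./.ci/run.sh') // placeholder build/test entry point
        }
    }
}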

/cc @sboeuf

sboeuf commented 7 years ago

Sorry, I cannot help you here since I have never used the Pipeline + GHPRB plugin combo; I am using freestyle jobs instead. BTW, I am not sure this issue (adding pipeline) is appropriate since our CI is working properly right now.

grahamwhaley commented 7 years ago

Sure, np @sboeuf - I know the pipeline setup is quite different from the freestyle setup we currently have working. I agree: as freestyle is working and pretty stable right now, let's leave it that way. If we find, longer term, that we end up using, say, Pipeline or DSL for the metrics Jenkins and it shows some real benefits, then we can reconsider whether migrating the QA CI makes sense.

Yes, it's a shame we cannot, afaik, mix the pipeline view with a freestyle project. I have to wonder if there are other plugins that could give us some of that functionality (and hence I'll leave this ticket open for now as a reminder that we might want to do some research).

jodh-intel commented 7 years ago

I've just had cause to view a complete console log in Jenkins. It actually crashed my Chromium tab when I tried to copy the text. A manual curl gives a clue as to why:

$ ls -lh consoleText 
-rw-r-----. 1 james james 74M 13 oct.  12:03 consoleText

The pipeline plugin should help speed up what has become a specialist role of "spotting the error", by splitting the build output across multiple (smaller :) files and giving a visual indication of which build step failed, so we can quickly ignore the chaff.

sboeuf commented 7 years ago

@jodh-intel Let me try to summarize a bit here. Now that you've enabled debug mode for all our components (which is a good thing ;) ), we get more than 70MiB of logs every time a build fails, and that is annoying when all we want is to determine where the tests failed. By saying we should use a Pipeline plugin (I think you mean the Parser plugin, because I don't see the relation with Pipeline here), you are kind of working around the problem without solving it. Even if we use a parser, it will never be perfect and we will eventually have to look into the logs, meaning the browser will crash or be super slow.

What do we really want? I would say we need to see pretty quickly where the tests are failing. That's why we should not dump the runtime/shim/proxy/crio logs into the build console output. That way we keep the output very readable and, with the colors, we see very quickly where the issue is coming from. At the same time, we want to be able to download the extra logs. To do that, I have submitted the following PR: https://github.com/clearcontainers/tests/pull/609. It will not dump the logs; instead it will copy them to a provided path. That path will be the workspace path on the Jenkins slave, and by enabling the Artifacts option as a post-build option, Jenkins will copy those logs from the VM onto the Jenkins server. That way the logs will be available for download.

@jodh-intel @grahamwhaley @sameo What do you think about this proposal?
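For illustration only (we use freestyle jobs, so in practice this is just the 'Archive the artifacts' post-build action), the same idea expressed as a pipeline sketch - the log path, variable name and script name are placeholders:

// Hypothetical sketch: collect the component logs into the workspace and
// archive them so they can be downloaded from the build page instead of
// being dumped into the console output.
pipeline {
    agent any
    stages {
        stage('Tests') {
            steps {
                // Placeholder: tell the test scripts to copy logs under the workspace
                // (CC_LOG_DIR is a made-up variable name for this sketch).
                sh 'CC_LOG_DIR="${WORKSPACE}/logs" ./run-tests.sh'
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'logs/**', allowEmptyArchive: true
        }
    }
}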

jodh-intel commented 7 years ago

Hi @sboeuf - firstly, yes, I do mean the pipeline plugin since, fwics, with that we get to see visually which stage failed. This, coupled with your idea of copying the logs (why didn't we do this from the outset, I wonder?), will allow us to trivially identify the stage that failed, and then (presumably?) Jenkins will provide a way to download the logs for just the failing stage.

sboeuf commented 7 years ago

@jodh-intel the thing is that what you are asking for would completely change what we have now, since we are not using Pipeline jobs but Freestyle jobs. That means we could use https://wiki.jenkins.io/display/JENKINS/Convert+To+Pipeline+Plugin to first convert the jobs, and then maybe identify which part failed.

grahamwhaley commented 7 years ago

I think we are all on about the same page. I'll add some more details to (hopefully) help even more. Yes, @jodh-intel is talking about the pipeline plugin, but we should be aware that this is not just a 'simple plugin' - it would mean changing our whole workflow style (as @sboeuf says) from freestyle to pipeline, and I don't think we are quite ready to approach that just yet...

The parser plugin @sboeuf references is, I believe, this one: https://plugins.jenkins.io/log-parser. It would give a log-parsing regexp ability similar to the log parser tool @jodh-intel added recently. I do think that might be useful for our logs (read below), as our 'failure pattern' texts seem reasonably, err, 'rich' - so having our own regexp patterns that we can adapt over time may pay us back nicely; a rough sketch of the rules idea is below.
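If we do try the log-parser plugin, my understanding (treat this as an assumption from memory, not something I have verified) is that its parsing rules file is just severity keywords followed by regexps, which we would tune to our own failure texts, roughly:

error /(?i)fail(ed|ure)/
error /(?i)panic/
warning /(?i)warning/
ok /known benign message we want to ignore/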

Both of these methods, though, still have the issue that @sboeuf has nicely noted: we'd still have a single huge log (the pipeline plugin, afaict, does not split the logs up - it shows which phase failed, but the 'console output' is still just one big file). Thus @sboeuf's suggestion of storing the logs off separately and using the Jenkins artifacts mechanism to store the log files/fragments back on the server may be a good way forwards right now. I also think that would remain compatible with the pipeline plugin if we move to it in the future, further refining the granularity with which we can see which 'phase' failed before digging into the logs.

Hopefully that makes sense - for now, log noise reduction (thanks for the PRs @jodh-intel), splitting the logs into files and using artifacts (thanks @sboeuf) should move us a long way forwards. In the future we can still consider the pipeline plugin as an option; I don't see anything 'incompatible' here with that.

sboeuf commented 7 years ago

Nice summary, @grahamwhaley!