adriaanm opened this issue 11 years ago
This feature is blowing my mind. (a) How are you planning to track the other jobs, and (b) shouldn't there be one thing that holds the canonical "this build was good" verdict, and shouldn't that thing be Jenkins?
Right now we start and track the same set of jobs.
The generalization I propose is to have distinct sets: the set of jobs we start, and the set that defines success. Technically, we'd start Jenkins watcher actors for all job names specified in the "monitor" setting, whereas the build would be started by asking Jenkins to start the jobs in the other set.
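To make the shape of this concrete, here's a minimal sketch; all the names (`JobSets`, `JenkinsApi`, `PullRequestBuild`, `startWatcher`) are hypothetical and only illustrate the start-vs-monitor split, not the bot's actual API:

```scala
// Hypothetical sketch of the start-vs-monitor split; names are illustrative only.
final case class JobSets(
  start:   Set[String], // jobs the bot asks Jenkins to kick off
  monitor: Set[String]  // jobs whose results define success and are reported on the PR
)

trait JenkinsApi {
  def startJob(name: String, params: Map[String, String]): Unit
}

class PullRequestBuild(jenkins: JenkinsApi, jobs: JobSets) {
  // In the real bot this would spawn a watcher actor that polls Jenkins
  // for the job's build status; here it's just a stub.
  private def startWatcher(jobName: String): Unit =
    println(s"watching $jobName")

  def validate(params: Map[String, String]): Unit = {
    jobs.monitor.foreach(startWatcher)               // watch everything that defines success
    jobs.start.foreach(jenkins.startJob(_, params))  // but only trigger this set
  }
}
```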
Does this clarify?
I understand the feature, but I feel there must be a simpler solution. Can't the same be accomplished by composing jobs using the flow plugin? It's flexible enough to run jobs in sequence or parallel, to control which ones should fail the build, etc.
I'm not sure I'm explaining it right. The only change to the bot is that instead of monitoring the same set of jobs as it starts, it starts one set and monitors another. That's not very hard to implement.
Sure, we should use the flow plugin (and I propose to do so, "using Jenkins as the scheduler"), but if the bot starts the flow job, how does it find the error logs in the downstream jobs? To know which jobs those are, it would have to parse the output of the flow plugin and then still monitor them, which is harder than specifying those jobs explicitly ourselves. Alternatively, we could stop monitoring the output of jobs, but I find it quite helpful to see in the PR which tests failed.
In any case, the work on the Jenkins jobs shouldn't be blocked by this feature: we can of course already start a job that uses the flow plugin to kick off the PR validation work. Once this issue is fixed, we can again monitor all of the individual jobs in the flow and report failed tests.
Very concretely, for scala PR validation, the set of started jobs would be: pr-scala-main. The monitored jobs would be: pr-checkin-per-commit, pr-rangepos-per-commit, pr-scala-integrate-ide.
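In terms of the hypothetical `JobSets` sketch above, that would read roughly as:

```scala
// Hypothetical instantiation for scala PR validation, using the job names above.
val scalaPrValidation = JobSets(
  start   = Set("pr-scala-main"),
  monitor = Set("pr-checkin-per-commit", "pr-rangepos-per-commit", "pr-scala-integrate-ide")
)
```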
I already experimented with a flow-based pr job here: https://scala-webapps.epfl.ch/jenkins/view/pr-validators/job/pr-all-per-commit/
Do you need to parse the output? I thought the build kitty only cares about the result (success/failure). The link takes you to a build, where you have links to the sub-builds and can drill down (a failure requires some manual inspection anyway).
I don't need to, but we currently do. Whenever a build fails and there's partest error output, you'll see it in the failure comment on your commit. That explains why you haven't seen it: it only happens on scala/scala. But it's quite handy.
I have no idea how I managed to explain all this so poorly, but I :rotating_light: do NOT propose to have the build bot schedule anything :rotating_light:. The only tweak is to have the build bot know which jobs are triggered by jenkins as a result of the job that it starts, so that it can monitor the result of those jobs for error reporting.
Read this first [edited]: Note that I do NOT propose to have the build bot schedule anything. The only tweak is to have the build bot know which jobs are triggered by jenkins as a result of the job that it starts, so that it can monitor the result of those jobs for error reporting.
Original, poor phrasing: I think we should make the build bot more flexible: we should be able to specify which jobs to start and which jobs to monitor for completion.
This way we can create more complex flows (such as for IDE PR validation), where we run three jobs A, B and C for validation, but where job C must run after job B (using Jenkins as a scheduler). All jobs must still complete for the PR to validate in this scenario, but you could also imagine running jobs without monitoring them.
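As a hypothetical illustration in the same sketch as above (the flow job name is made up), the A/B/C scenario might be configured as:

```scala
// Hypothetical A/B/C scenario: the bot triggers A plus a flow job that runs B then C;
// Jenkins does the sequencing, and the PR validates only when A, B and C all succeed.
val ideValidation = JobSets(
  start   = Set("A", "flow-B-then-C"),
  monitor = Set("A", "B", "C")
)
```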