tektoncd / experimental

Experimental Tekton Components

Jenkins Remote Execution Custom Task #697

Open imjasonh opened 3 years ago

imjasonh commented 3 years ago

Opening this issue to collect ideas, discussion, interest, etc., for a custom task controller that executes a Jenkins Job on a remote Jenkins installation, watches it to completion, reports success/failure, and maybe emits some results.

This would be the reverse of Vibhav's Jenkins Plugin for Tekton, which starts and watches Tekton executions from Jenkinsland. This new controller would let Jenkins users slowly adopt Tekton, either by having their Jenkins workloads kick off Tekton workloads, or now, vice versa. Or perhaps both, horrifyingly. 😨

This custom task could define a new CRD type that describes the Jenkins Job to create, possibly with parameters (and workspaces? Maybe?), which pipeline authors would then reference in the pipeline spec:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
...
spec:
  tasks:
  - name: my-jenkins-job
    taskRef:
      apiVersion: example.dev/v0
      kind: JenkinsJob
      name: my-jenkins-job

When run, the custom task controller would look up an example.dev/v0 JenkinsJob custom resource object named my-jenkins-job, which might look like:

apiVersion: example.dev/v0
kind: JenkinsJob
metadata:
  name: my-jenkins-job
spec:
  job:
    # something goes here, I don't know what exactly

...then send that config to a remote Jenkins installation using the Remote Access API. After submitting, the controller would update the Run with any information about the Job it created, and proceed to poll the Job by repeatedly calling EnqueueAfter like wait-task does, until the Job is complete (or times out).
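
For concreteness, here is a minimal Go sketch of the two Jenkins interactions involved, using the Remote Access API's buildWithParameters endpoint to trigger a build and the build's api/json endpoint to check whether it finished. The Client type, its fields, and the error handling are illustrative assumptions, not a worked-out design; resolving the returned queue item to a build number, CSRF crumbs, etc. are all glossed over:

package jenkins

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Client holds just enough to talk to a remote Jenkins installation.
// (Hypothetical type for illustration; field names are assumptions.)
type Client struct {
	BaseURL  string // e.g. https://jenkins.example.com
	User     string
	APIToken string // used as the basic-auth password
	HTTP     *http.Client
}

// Trigger enqueues a parameterized build of the named job. Jenkins responds
// 201 Created with a Location header pointing at the queue item; mapping that
// queue item to a concrete build number is elided here.
func (c *Client) Trigger(job string, params url.Values) (queueURL string, err error) {
	u := fmt.Sprintf("%s/job/%s/buildWithParameters?%s", c.BaseURL, url.PathEscape(job), params.Encode())
	req, err := http.NewRequest(http.MethodPost, u, nil)
	if err != nil {
		return "", err
	}
	req.SetBasicAuth(c.User, c.APIToken)
	resp, err := c.HTTP.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return "", fmt.Errorf("triggering %q: unexpected status %s", job, resp.Status)
	}
	return resp.Header.Get("Location"), nil
}

// Status reports whether build number n of the job is still running and, once
// finished, its result ("SUCCESS", "FAILURE", "ABORTED", ...).
func (c *Client) Status(job string, n int) (building bool, result string, err error) {
	u := fmt.Sprintf("%s/job/%s/%d/api/json", c.BaseURL, url.PathEscape(job), n)
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return false, "", err
	}
	req.SetBasicAuth(c.User, c.APIToken)
	resp, err := c.HTTP.Do(req)
	if err != nil {
		return false, "", err
	}
	defer resp.Body.Close()
	var body struct {
		Building bool   `json:"building"`
		Result   string `json:"result"` // empty while the build is still running
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, "", err
	}
	return body.Building, body.Result, nil
}

Credentials would presumably come from a Secret referenced by the JenkinsJob or the Run, but that's another open question.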


Now, the part where I plead for help: I have basically no experience with Jenkins (I've only read the documentation), but as far as I can tell this seems doable. Input from someone with more experience here would be very useful.

gabemontero commented 3 years ago

@waveywaves - FYI in case you were unaware ^^

@waveywaves == Vibhav :-)

gabemontero commented 3 years ago

@ImJasonH - in case this has not bubbled up in your RH onboarding - a possible historical reference for launching Jenkins and Jenkins pipelines from k8s (in this case, k8s == OpenShift)

https://docs.openshift.com/container-platform/4.6/builds/build-strategies.html#builds-strategy-pipeline-build_build-strategies

@akram and @waveywaves now own ^^ but I was the original owner

I'm in no way trying (at least yet) to endorse any carry-over from all that work to what you are trying to accomplish here, but perhaps we should have a more detailed voice-to-voice discussion.

akram commented 3 years ago

Hi @ImJasonH ,

can you PTAL at https://github.com/tektoncd/catalog/blob/master/task/trigger-jenkins-job/0.1/README.md? It should probably do what you are looking for.

cc @chmouel

chmouel commented 3 years ago

There is another Jenkins task in the catalog which is a bit more generic:

https://github.com/tektoncd/catalog/blob/master/task/jenkins/0.1/README.md

imjasonh commented 3 years ago

Yeah, those catalog tasks are great inspiration, and I think the custom task controller would likely do mostly the same thing.

The difference in this case is that instead of having the Job triggered and watched from one Pod per ongoing Job, there would be one centralized controller responsible for starting and watching all ongoing Jobs. I would expect this to be more efficient and more fault-tolerant.

Instead of having N containers each effectively running job = start(); while true { poll(job) || break; sleep(dur); }, there would be one controller watching for new requests for Jobs, starting them, then adding them to a global queue of requests to poll for job status. If the controller restarts, it can pick up where it left off, and we could even have multiple workers consuming the queue.
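
As a rough sketch of that shape (every type here is a placeholder: the real controller would operate on Run objects and requeue via EnqueueAfter, and triggering a Jenkins build actually yields a queue item rather than a build number directly), the per-reconcile decision might look something like:

package controller

import (
	"fmt"
	"time"
)

// jenkinsClient is the small slice of the Jenkins Remote Access API the loop
// needs; a stand-in for a real client.
type jenkinsClient interface {
	Trigger(job string) (buildNumber int, err error)
	Status(job string, buildNumber int) (building bool, result string, err error)
}

// runState stands in for the status fields the controller would persist on
// the Run object between reconciles.
type runState struct {
	JobName     string
	BuildNumber int // 0 means the Jenkins build hasn't been started yet
	Done        bool
	Succeeded   bool
}

const pollInterval = 10 * time.Second

// reconcile is invoked once per item pulled off the work queue; returning a
// non-zero duration means "enqueue this Run again after that long", which is
// the role EnqueueAfter plays in the real reconciler.
func reconcile(jc jenkinsClient, run *runState) (requeueAfter time.Duration, err error) {
	if run.Done {
		return 0, nil // nothing left to do
	}
	if run.BuildNumber == 0 {
		// First reconcile for this Run: start the Jenkins build, record which
		// build it is, and come back later to check on it.
		n, err := jc.Trigger(run.JobName)
		if err != nil {
			return 0, fmt.Errorf("starting job %q: %w", run.JobName, err)
		}
		run.BuildNumber = n
		return pollInterval, nil
	}
	// Subsequent reconciles: poll the build we already started.
	building, result, err := jc.Status(run.JobName, run.BuildNumber)
	if err != nil {
		return pollInterval, err // treat as transient and try again later
	}
	if building {
		return pollInterval, nil
	}
	run.Done = true
	run.Succeeded = result == "SUCCESS"
	return 0, nil
}

Because the state that matters (which build was started, whether it finished) would live on the Run itself, a restarted controller could rebuild its queue from the informer cache and carry on, which is where the fault tolerance would come from.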

I've previously prototyped a similar controller for running remote builds on Google Cloud Build, and I think this would operate in mostly the same way.

tekton-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

imjasonh commented 3 years ago

/lifecycle frozen