grahamja closed this issue 3 years ago
Hello. Plugins need to be updated to newer versions. Try overriding the defaults in the Jenkins CR:

master:
  basePlugins:
    ...
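For what it's worth, a minimal sketch of such an override, reusing plugin names and versions that appear later in this thread (treat the CR name and the versions as illustrative; use whatever set matches your Jenkins image):

apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example   # hypothetical CR name
spec:
  master:
    basePlugins:
      - name: kubernetes
        version: "1.28.6"
      - name: workflow-job
        version: "2.40"
      # ...pin the remaining base plugins the same way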
Thanks, that worked. However, I had tried the same thing (with different versions). How did you come up with those version numbers specifically, and where do those requirements come from?
@SylwiaBrant I've had this issue before, simply because upstream plugins constantly change: a pod that booted months ago with no issue and has run fine ever since gets rescheduled and then won't boot, because the upstream plugin versions have changed. I've pinned specific working versions of the plugins I want in plugins and basePlugins. Do you happen to know whether the Jenkins Operator code ignores these versions at launch and just auto-updates the plugins listed at boot time, or whether it always auto-installs the latest versions it needs from basePlugins? If it does, that seems like a big bug.
I used the latest versions of the plugins. If you are using the 'latest' tag of the Jenkins image, the plugins will lose compatibility when a newer image gets downloaded. We've started using specific versions as tags to avoid such situations. The Operator uses default plugin versions if none are specified by the user in the CR. If you override the base plugins in the CR, the Operator will install the versions you specified.
I've built a custom Jenkins container using an explicit sha256sum pointing to an exact jenkins/jenkins:lts release, and pinned the explicit plugins and basePlugins I know work. Now, a few months later, the same config breaks at relaunch.
I suspect this is because I need to list every plugin being installed with a pinned version. The only way I know how to do this (and I still need to verify it works) is to launch with the few plugins I need, watch the container boot output, and scan for something like the list below (a command for pulling this block out of the log is sketched after the list):
Installed plugins:
ace-editor:1.1
apache-httpcomponents-client-4-api:4.5.13-1.0
authentication-tokens:1.4
aws-credentials:1.28.1
aws-java-sdk:1.11.995
bootstrap4-api:4.6.0-3
bouncycastle-api:2.20
branch-api:2.6.3
...
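One way to capture that block, assuming the Operator's default naming for a CR called example (pod jenkins-example, container jenkins-master; adjust both to your deployment):

# print the "Installed plugins:" section of the master pod log
kubectl logs jenkins-example -c jenkins-master | sed -n '/^Installed plugins:/,/^$/p'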
I.e., it looks like the output above is an exact enumeration of every transitive dependency being pulled in, and possibly something that can be declared in order to ensure that 100% of the plugins and their dependencies are fixed so things won't break in the future.
You used the jenkins/jenkins:lts image, which is a floating tag. At the time you declared the plugins, the image hidden behind "lts" worked with the plugin set you used. After those few months passed, new images were released and lts started pointing at an image that dropped compatibility with the plugins you declared; it needed newer plugins to function. Don't use jenkins/jenkins:lts, but a specific tag, for example https://github.com/jenkinsci/kubernetes-operator/blob/master/deploy/crds/jenkins_v1alpha2_jenkins_cr.yaml#L9 Newer Jenkins plugins and images will keep coming out, but you will still be using the image compatible with your pinned plugin versions. And if you want to upgrade, try specifying the newest tag (not lts) and the newest plugins.
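In CR terms that means something like the following; the tag here is only an example, pick one you've verified against your plugin set:

master:
  containers:
    - name: jenkins-master
      image: jenkins/jenkins:2.277.4-lts-alpine  # example pinned tag, not a recommendation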
Essentially I did this:
A few months ago I built a custom docker container from a specific jenkins/jenkins:lts sha:
# This is the jenkins/jenkins:lts image as of 2021-03-25
FROM jenkins/jenkins@sha256:3647dc7dcf43faf20a612465dc1aed6bf510893ff9724df4050604af80123b85
...
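As an aside, one way to resolve the digest behind a floating tag (assuming Docker is installed and the tag has been pulled):

docker pull jenkins/jenkins:lts
docker inspect --format '{{index .RepoDigests 0}}' jenkins/jenkins:lts
# prints jenkins/jenkins@sha256:..., suitable for the FROM line above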
Then I pinned the plugins I specifically needed in my operator.yml, in addition to the basePlugins list that worked at the time, and launched my custom container from above with it:
master:
  containers:
    - name: jenkins-master
      image: 1111111.dkr.ecr.us-east-1.amazonaws.com/jenkins:1.0.2 # this is the custom container from above
      ...
  plugins:
    - name: aws-credentials
      version: "1.28.1"
    - name: credentials-binding
      version: "1.24"
    - name: github
      version: "1.33.1"
    - name: jdk-tool
      version: "1.5"
    - name: ldap
      version: "2.4"
    - name: matrix-auth
      version: "2.6.6"
    - name: multiple-scms
      version: "0.6"
    - name: pipeline-utility-steps
      version: "2.7.0"
    - name: role-strategy
      version: "3.1.1"
    - name: saml
      version: "2.0.2"
    - name: ssh-credentials
      version: "1.18.1"
    - name: throttle-concurrents
      version: "2.2"
    - name: workflow-multibranch
      version: "2.22"
    - name: jenkins-multijob-plugin
      version: "1.36"
  basePlugins:
    - name: kubernetes
      version: "1.28.6"
    - name: workflow-job
      version: "2.40"
    - name: workflow-aggregator
      version: "2.6"
    - name: git
      version: "4.5.0"
    - name: job-dsl
      version: "1.77"
    - name: configuration-as-code
      version: "1.47"
    - name: kubernetes-credentials-provider
      version: "0.15"
This booted fine months ago, but now it fails at relaunch due to plugin errors. I'm still booting off the same container, so the image hasn't changed. I suspect the problem is that I haven't listed every plugin dependency: the transitive dependencies pulled in by the plugins listed above aren't explicitly pinned, and their versions have changed in the meantime.
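Under that theory, pinning git alone wouldn't be enough, for example; its transitive dependencies would need pins of their own (names and versions taken from the boot output below):

  plugins:
    - name: git
      version: "4.5.0"
    # transitive dependencies of git, pinned explicitly
    - name: git-client
      version: "3.7.1"
    - name: scm-api
      version: "2.6.4"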
I suspect what I need to do going forward is to start from a minimal working plugin list like the one above, boot the container, and then look for boot output like this:
WAR bundled plugins:
Installed plugins:
ace-editor:1.1
apache-httpcomponents-client-4-api:4.5.13-1.0
authentication-tokens:1.4
bootstrap4-api:4.6.0-3
bouncycastle-api:2.20
branch-api:2.6.3
caffeine-api:2.9.1-23.v51c4e2c879c8
checks-api:1.7.0
cloudbees-folder:6.15
configuration-as-code:1.47
credentials-binding:1.24
credentials:2.4.1
display-url-api:2.3.4
durable-task:1.36
echarts-api:5.1.0-2
font-awesome-api:5.15.3-2
git-client:3.7.1
git:4.5.0
git-server:1.9
handlebars:3.0.8
jackson2-api:2.12.3
job-dsl:1.77
jquery3-api:3.6.0-1
jsch:0.1.55.2
junit:1.49
kubernetes-client-api:4.13.3-1
kubernetes-credentials:0.8.0
kubernetes-credentials-provider:0.15
kubernetes:1.28.6
lockable-resources:2.10
mailer:1.34
matrix-project:1.18
metrics:4.0.2.7
momentjs:1.1.1
pipeline-build-step:2.13
pipeline-graph-analysis:1.10
pipeline-input-step:2.12
pipeline-milestone-step:1.3.2
pipeline-model-api:1.8.4
pipeline-model-definition:1.8.4
pipeline-model-extensions:1.8.4
pipeline-rest-api:2.19
pipeline-stage-step:2.5
pipeline-stage-tags-metadata:1.8.4
pipeline-stage-view:2.19
plain-credentials:1.7
plugin-util-api:2.2.0
popper-api:1.16.1-2
scm-api:2.6.4
script-security:1.77
snakeyaml-api:1.27.0
ssh-credentials:1.18.1
structs:1.23
trilead-api:1.0.13
variant:1.4
workflow-aggregator:2.6
workflow-api:2.42
workflow-basic-steps:2.23
workflow-cps-global-lib:2.19
workflow-cps:2.92
workflow-durable-task-step:2.39
workflow-job:2.40
workflow-multibranch:2.24
workflow-scm-step:2.12
workflow-step-api:2.23
workflow-support:3.8
This output contains a much larger list of plugins which are apparently installed, and it appears to be the complete dependency tree. I suspect that if I then go back and update the plugins list in my operator.yml with the large list from this output, I can explicitly pin every dependency being pulled in, and thus won't have to worry about plugin versions shifting over time and breaking things.
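If it helps anyone, a rough one-liner to turn that output into operator.yml entries (same assumed pod/container names as in the earlier sketch; double-check the result by hand):

kubectl logs jenkins-example -c jenkins-master \
  | sed -n '/^Installed plugins:/,/^$/p' \
  | awk -F: '/:.+/ {printf "    - name: %s\n      version: \"%s\"\n", $1, $2}'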
Okay, I understand. It could have been downtime of the update center, or some plugin or one of its dependencies could contain an error. Maybe one was deprecated and stopped working, or its dependencies changed. I can't say anything definite about the reason, since it's not a bug in the Jenkins Operator but rather has to do with Jenkins itself. Using a specific image tag with all the plugins specified, together with their dependencies, should help. I don't see anything that should change then.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this issue is still affecting you, just comment with any updates and we'll keep it open. Thank you for your contributions.
Closing as the problem currently has no better workaround.
Describe the bug
Jenkins master pod continuously restarts. Logs show errors when loading plugins (log at the end of the issue).
To Reproduce
This is with a fresh GKE cluster, using the steps and manifests in the Jenkins Operator documentation:
https://jenkinsci.github.io/kubernetes-operator/docs/installation/
https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/deploy-jenkins/
kubectl create ns jenkins
kubens jenkins
kubectl apply -f https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/master/deploy/crds/jenkins_v1alpha2_jenkins_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/master/deploy/all-in-one-v1alpha2.yaml
kubectl apply -f jenkins.yaml
(jenkins.yaml is from https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/deploy-jenkins/)

Additional information
Kubernetes version: 1.19.9-gke.1400
Jenkins Operator version: 0.5.0
The operator logs show that it is restarting the jenkins-master pod because it notices the plugins are missing.
You can see why in the jenkins master pod log: