mbach04 closed this issue 2 years ago
Hi,
The application does not run as root, but as a dedicated user; root privileges are dropped during entrypoint startup. This can be seen by exec'ing into the pod:
# kubectl exec -ti jira-0 -- bash
root@jira-0:/var/atlassian/application-data/jira# ps axu
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2508 608 ? Ss Jul20 0:21 /usr/bin/tini -- /entrypoint.py
root 6 0.0 0.0 7500 3368 ? S Jul20 0:00 /bin/su jira -c /opt/atlassian/jira/bin/start-jira.sh -fg
jira 8 0.0 3.3 6476944 536608 ? Ssl Jul20 22:45 /opt/java/openjdk/bin/java -Djava.util.logging.config.file=/opt/atlassian/jira/conf/logging.properties -Djava.util.
root 117 0.3 0.0 7244 3948 pts/0 Ss 03:14 0:00 bash
root 124 0.0 0.0 8904 3292 pts/0 R+ 03:14 0:00 ps axu
root@jira-0:/var/atlassian/application-data/jira#
You can see the logic in the docker image repository: https://bitbucket.org/atlassian-docker/docker-shared-components/src/4282e5197509f3144108169be8b32c674ec39679/image/entrypoint_helpers.py#lines-106
Regards, Steve
@tarka @mbach04 it may still make sense to add securityContext.runAsUser. In many enterprise clusters, pod security policies enforce this. Please note, I am not talking about the app itself (yes, Jira runs as a dedicated user) but about the container. I opened a feature request some time ago.
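For example, a pod-level securityContext like the following (a sketch; the UID/GID values are illustrative, not chart defaults) is what such a policy typically expects:

```yaml
# Illustrative pod-level securityContext for a restricted cluster.
# The numeric IDs are example values, not defaults from the chart.
securityContext:
  runAsNonRoot: true
  runAsUser: 2001
  runAsGroup: 2001
  fsGroup: 2001
```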
I explain this to my security personnel as well, but always point to this:
To preserve strict permissions for certain configuration files, this container starts as root to perform bootstrapping before running Jira under a non-privileged user account. If you wish to start the container as a non-root user, please note that Tomcat configuration will be skipped and a warning will be logged. You may still apply custom configuration in this situation by mounting a custom server.xml file directly to /opt/atlassian/jira/conf/server.xml
https://hub.docker.com/r/atlassian/jira-software
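When the container starts as non-root and Tomcat configuration is skipped, a custom server.xml can be mounted from a ConfigMap, roughly like this (a sketch; the ConfigMap name is illustrative):

```yaml
# Mount a pre-rendered server.xml over the file that root-level
# bootstrapping would normally generate. ConfigMap name is illustrative.
volumes:
  - name: server-xml
    configMap:
      name: jira-server-config
containers:
  - name: jira
    volumeMounts:
      - name: server-xml
        mountPath: /opt/atlassian/jira/conf/server.xml
        subPath: server.xml
```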
I think this is more of a container fix than a helm chart deployment fix.
For example, Twistlock picks up this finding regardless since the container user is root.
@bordenit good point. With that being said though, I think it won't hurt having SecurityContext fully configurable and flexible. This is one of the issues I have immediately run into when deploying to OpenShift where security context constraints are strict out of the box.
@bianchi2 Good point as well. SecurityContext settings are common configurable helm values.
@bordenit we are planning to tackle https://github.com/atlassian-labs/data-center-helm-charts/issues/80 soon to provide more flexibility. Would that be sufficient for your use case?
Changing the container to non-root will be more work on the image side.
@nanux I have not tested it, but it seems like it would be an improvement/mitigation.
@bordenit, @mbach04, do you think the enhancement in this PR (SecurityContext) can be used as a solution to mitigate the security concerns raised in this issue?
Hi all, since there is now a mechanism in place for configuring the SecurityContext (see the discussion and PR mentioned here), I'm closing this issue for now. If there are still concerns on this front we can re-open it.
Hello, may I know what the solution to this is? I am still having trouble, since we can only deploy non-root images into our k8s clusters. I tried adding fsGroup and runAsUser fields to securityContext; with that, it fails while adding the DB configuration: Caused by: java.lang.IllegalStateException: java.io.IOException: Read-only file system
@vraghun you may want to check permissions in your shared-home directory (it looks like Jira can't write to dbconfig.xml?). Also, when running as non-root, set https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L753 to true
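As a sketch of the values-side configuration this implies (key names as found in the chart's values.yaml; verify against your chart version, and the fsGroup value here is illustrative):

```yaml
# Sketch of non-root-related values for the Jira DC Helm chart.
# Check these keys against the values.yaml of your chart version.
jira:
  securityContextEnabled: true
  securityContext:
    fsGroup: 2001        # group that should own mounted volumes
  containerSecurityContext: {}   # leave empty; pod-level settings apply
```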
Thanks for the response. Actually, the shared volume is accessible by the jira user; I tried creating a test file in the shared-volume directory:
jira@jira-poc-0:/var/atlassian/application-data/shared-home$ pwd
/var/atlassian/application-data/shared-home
jira@jira-poc-0:/var/atlassian/application-data/shared-home$ touch test
jira@jira-poc-0:/var/atlassian/application-data/shared-home$ ls -l test
-rw-r--r--. 1 jira jira 0 Apr 23 04:36 test
jira@jira-poc-0:/var/atlassian/application-data/shared-home$
I set generateByHelm: true, but am still seeing these errors:
2024-04-23 04:31:35,573+0000 main ERROR [o.apache.jasper.EmbeddedServletOptions] The scratchDir you specified: [/opt/atlassian/jira/work/Catalina/localhost/ROOT] is unusable.
2024-04-23 04:31:35,679+0000 main INFO [c.a.jira.startup.DefaultJiraLauncher] Stopping launchers
2024-04-23 04:31:35,790+0000 main ERROR [o.a.c.c.C.[Catalina].[localhost].[/]] Exception sending context destroyed event to listener instance of class [com.atlassian.jira.startup.LauncherContextListener]
java.lang.NullPointerException
    at com.atlassian.jira.startup.ClusteringLauncher.stop(ClusteringLauncher.java:142)
I am trying to fix the scratchDir error as per https://confluence.atlassian.com/jirakb/jira-server-throws-unable-to-create-directory-for-deployment-error-on-startup-389781040.html, but I am not able to change permissions on the $JIRA_INSTALL directory, because I am logged in as a non-root user (the only kind permitted in our k8s clusters):
jira@jira-poc-0:~$ ls -ld /opt/atlassian/jira/
dr-xr-x---. 1 jira root 4096 Mar 26 03:55 /opt/atlassian/jira/
jira@jira-poc-0:~$ cd /opt/atlassian/jira/
jira@jira-poc-0:/opt/atlassian/jira$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
overlay         307G  9.1G  286G   4% /
jira@jira-poc-0:/opt/atlassian/jira$
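One common mitigation when the install tree is read-only is to mount a writable emptyDir over Tomcat's work directory, analogous to the temp mount already in the manifest below (a sketch; the volume name is illustrative, the mountPath comes from the scratchDir error above):

```yaml
# Give Tomcat a writable scratch directory without changing
# permissions on the read-only $JIRA_INSTALL tree.
volumes:
  - name: tomcat-work
    emptyDir: {}
containers:
  - name: jira
    volumeMounts:
      - name: tomcat-work
        mountPath: /opt/atlassian/jira/work
```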
Can you share your securityContext? You need to make sure that it's run as jira user.
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jira-poc
  labels:
    helm.sh/chart: jira-1.16.1
    app.kubernetes.io/name: jira
    app.kubernetes.io/instance: jira-poc
    app.kubernetes.io/version: "9.4.10"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  serviceName: jira-poc
  selector:
    matchLabels:
      app.kubernetes.io/name: jira
      app.kubernetes.io/instance: jira-poc
  template:
    metadata:
      annotations:
        checksum/config-jvm: df119c0a2faa360aede67387268925c03493cdece0db2853e27504a18a11beb2
labels:
app.kubernetes.io/name: jira
app.kubernetes.io/instance: jira-poc
spec:
serviceAccountName: jira-poc
terminationGracePeriodSeconds: 30
hostAliases:
securityContext:
fsGroup: 2001
runAsGroup: 2001
runAsUser: 2001
initContainers:
- name: nfs-permission-fixer
image: alpine:3.19
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0 # make sure we run as root so we get the ability to change the volume permissions
resources:
limits:
cpu: 400m
memory: 1Gi
requests:
cpu: 150m
memory: 512Mi
volumeMounts:
- name: shared-home
mountPath: "/shared-home"
command: ["sh", "-c", "(chgrp 2001 /shared-home; chmod g+w /shared-home)"]
containers:
- name: jira
image: "atlassian/jira-software:9.4.10"
imagePullPolicy: IfNotPresent
env:
- name: ATL_TOMCAT_SCHEME
value: "https"
- name: ATL_TOMCAT_SECURE
value: "true"
- name: ATL_TOMCAT_PORT
value: "8080"
- name: ATL_FORCE_CFG_UPDATE
value: "true"
- name: ATL_DB_TYPE
value: "mysql8"
- name: ATL_DB_DRIVER
value: "com.mysql.cj.jdbc.Driver"
- name: ATL_JDBC_URL
value: "jdbc:mysql://<dbhost>/<dbname>"
- name: ATL_JDBC_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: ATL_JDBC_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: CLUSTERED
value: "true"
- name: JIRA_NODE_ID
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: EHCACHE_LISTENER_HOSTNAME
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: EHCACHE_LISTENER_PORT
value: "40001"
- name: EHCACHE_OBJECT_PORT
value: "40011"
- name: SET_PERMISSIONS
value: "true"
- name: JIRA_SHARED_HOME
value: "/var/atlassian/application-data/shared-home"
- name: JVM_SUPPORT_RECOMMENDED_ARGS
valueFrom:
configMapKeyRef:
key: additional_jvm_args
name: jira-poc-jvm-config
- name: JVM_MINIMUM_MEMORY
valueFrom:
configMapKeyRef:
key: min_heap
name: jira-poc-jvm-config
- name: JVM_MAXIMUM_MEMORY
valueFrom:
configMapKeyRef:
key: max_heap
name: jira-poc-jvm-config
- name: JVM_RESERVED_CODE_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: reserved_code_cache
name: jira-poc-jvm-config
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: ehcache
containerPort: 40001
protocol: TCP
- name: ehcacheobject
containerPort: 40011
protocol: TCP
securityContext:
runAsUser: 2001
resources:
limits:
cpu: 500m
memory: 2Gi
requests:
cpu: 250m
memory: 812Mi
volumeMounts:
- name: local-home
mountPath: "/var/atlassian/application-data/jira"
- name: local-home
mountPath: "/opt/atlassian/jira/logs"
subPath: "log"
- name: shared-home
mountPath: "/var/atlassian/application-data/shared-home"
- name: server-xml
mountPath: /opt/atlassian/jira/conf/server.xml
subPath: server.xml
- name: temp
mountPath: /opt/atlassian/jira/temp
- name: shared-home
mountPath: "/opt/atlassian/jira/lib/mysql-connector-j-8.3.0.jar"
subPath: "p1-plugins/mysql-connector-j-8.3.0.jar"
lifecycle:
preStop:
exec:
command: ["sh", "-c", "/shutdown-wait.sh"]
volumes:
- name: shared-home
nfs:
path: /export/jira-poc
server: jira-fss-mt-68-poc.test.iad1.test.com
- name: server-xml
configMap:
name: jira-poc-server-config
items:
- key: server.xml
path: server.xml
- name: temp
emptyDir: {}
volumeClaimTemplates:
@vraghun looks ok to me. It's not clear what's happening without full logs from /var/atlassian/application-data/jira/log
UPDATE: you seem to define securityContext twice, at both pod and container level. Drop the container-level securityContext.
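In other words, keep only the pod-level block, something like this (a sketch using the values from the manifest above):

```yaml
# Pod-level securityContext only; its settings apply to every
# container in the pod, so no container-level override is needed.
spec:
  securityContext:
    fsGroup: 2001
    runAsGroup: 2001
    runAsUser: 2001
  containers:
    - name: jira
      # no securityContext here
```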
@bianchi2, actually the problem was with the DB: Jira expects the DB to be empty. Re-creating the DB solved my issue. But I am curious: if my pod goes away, will I lose the dbconfig.xml file again, since I entered the DB details in the UI and not through values.yaml? And how do I restore my old Jira data to the new DB, given that Jira expects an empty one?
dbconfig.xml is generated if it doesn't exist, and it's saved in a volume.
The main jira/tomcat pod runs as the root user (UID 0) by default. This is bad security practice.