jnm27 opened this issue 1 month ago
The operator should use the value of the `RELATED_IMAGE_GRAFANA` environment variable. If installing from OperatorHub or the OpenShift operator catalog, this variable should be populated with the image digest. How did you install the operator?
From the operator catalog GUI
Can you share the YAML of the deployment? I'd like to see which values the env vars have.
Here are the operator and Grafana instance deployments after installing the operator and then creating a Grafana instance in the GUI with all defaults:
`oc get deployment -n grafana-operator -o yaml`:

```yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2024-10-08T14:10:35Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: grafana-operator
    name: grafana-a-deployment
    namespace: grafana-operator
    ownerReferences:
    - apiVersion: grafana.integreatly.org/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Grafana
      name: grafana-a
      uid: 8ddac148-bdce-44a7-947b-d5be558233b3
    resourceVersion: "9355951"
    uid: a72b7046-877a-4f73-a9f0-bd190ca7829a
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: grafana-a
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: grafana-a
        name: grafana-a-deployment
      spec:
        containers:
        - args:
          - -config=/etc/grafana/grafana.ini
          env:
          - name: PLUGINS_HASH
            valueFrom:
              configMapKeyRef:
                key: PLUGINS_HASH
                name: grafana-a-plugins
                optional: true
          - name: CONFIG_HASH
            value: de45f88b5d91e5a68192330ec849d230e6951be0377a3514630a524b1e5ac0e6
          - name: GF_INSTALL_PLUGINS
          - name: TMPDIR
            value: /var/lib/grafana
          - name: GF_SECURITY_ADMIN_USER
            valueFrom:
              secretKeyRef:
                key: GF_SECURITY_ADMIN_USER
                name: grafana-a-admin-credentials
          - name: GF_SECURITY_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: GF_SECURITY_ADMIN_PASSWORD
                name: grafana-a-admin-credentials
          image: docker.io/grafana/grafana:10.4.3
          imagePullPolicy: IfNotPresent
          name: grafana
          ports:
          - containerPort: 3000
            name: grafana-http
            protocol: TCP
          readinessProbe:
            failureThreshold: 1
            httpGet:
              path: /api/health
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 3
          resources:
            limits:
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 256Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /etc/grafana/
            name: grafana-a-ini
          - mountPath: /var/lib/grafana
            name: grafana-data
          - mountPath: /var/log/grafana
            name: grafana-logs
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          seccompProfile:
            type: RuntimeDefault
        serviceAccount: grafana-a-sa
        serviceAccountName: grafana-a-sa
        terminationGracePeriodSeconds: 30
        volumes:
        - configMap:
            defaultMode: 420
            name: grafana-a-ini
          name: grafana-a-ini
        - emptyDir: {}
          name: grafana-logs
        - emptyDir: {}
          name: grafana-data
  status:
    conditions:
    - lastTransitionTime: "2024-10-08T14:10:35Z"
      lastUpdateTime: "2024-10-08T14:10:35Z"
      message: Deployment does not have minimum availability.
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    - lastTransitionTime: "2024-10-08T14:20:36Z"
      lastUpdateTime: "2024-10-08T14:20:36Z"
      message: ReplicaSet "grafana-a-deployment-79b6bc5475" has timed out progressing.
      reason: ProgressDeadlineExceeded
      status: "False"
      type: Progressing
    observedGeneration: 1
    replicas: 1
    unavailableReplicas: 1
    updatedReplicas: 1
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2024-10-08T14:09:49Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: olm
      app.kubernetes.io/name: grafana-operator
      olm.deployment-spec-hash: 9kaN3nc3RgxHOBYQC1j07z1DKLWJljR5IvaUd3
      olm.managed: "true"
      olm.owner: grafana-operator.v5.13.0
      olm.owner.kind: ClusterServiceVersion
      olm.owner.namespace: grafana-operator
      operators.coreos.com/grafana-operator.grafana-operator: ""
    name: grafana-operator-controller-manager-v5
    namespace: grafana-operator
    ownerReferences:
    - apiVersion: operators.coreos.com/v1alpha1
      blockOwnerDeletion: false
      controller: false
      kind: ClusterServiceVersion
      name: grafana-operator.v5.13.0
      uid: cd7bbf9f-017a-4fa1-98a0-f792050e4a48
    resourceVersion: "9349991"
    uid: 90590841-3dfa-4633-9705-b1d7b3d1d5ed
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 1
    selector:
      matchLabels:
        app.kubernetes.io/managed-by: olm
        app.kubernetes.io/name: grafana-operator
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        annotations:
          alm-examples: |-
            [
              {
                "apiVersion": "grafana.integreatly.org/v1beta1",
                "kind": "Grafana",
                "metadata": {
                  "labels": {
                    "dashboards": "grafana-a",
                    "folders": "grafana-a"
                  },
                  "name": "grafana-a"
                },
                "spec": {
                  "config": {
                    "auth": {
                      "disable_login_form": "false"
                    },
                    "log": {
                      "mode": "console"
                    },
                    "security": {
                      "admin_password": "start",
                      "admin_user": "root"
                    }
                  }
                }
              },
              {
                "apiVersion": "grafana.integreatly.org/v1beta1",
                "kind": "GrafanaAlertRuleGroup",
                "metadata": {
                  "name": "grafanaalertrulegroup-sample"
                },
                "spec": {
                  "folderRef": "test-folder-from-operator",
                  "instanceSelector": {
                    "matchLabels": {
                      "dashboards": "grafana"
                    }
                  },
                  "interval": "5m",
                  "rules": [
                    {
                      "condition": "B",
                      "data": [
                        {
                          "datasourceUid": "grafanacloud-demoinfra-prom",
                          "model": {
                            "datasource": {
                              "type": "prometheus",
                              "uid": "grafanacloud-demoinfra-prom"
                            },
                            "editorMode": "code",
                            "expr": "weather_temp_c{}",
                            "instant": true,
                            "intervalMs": 1000,
                            "legendFormat": "__auto",
                            "maxDataPoints": 43200,
                            "range": false,
                            "refId": "A"
                          },
                          "refId": "A",
                          "relativeTimeRange": {
                            "from": 600
                          }
                        },
                        {
                          "datasourceUid": "__expr__",
                          "model": {
                            "conditions": [
                              {
                                "evaluator": {
                                  "params": [
                                    0
                                  ],
                                  "type": "gt"
                                },
                                "operator": {
                                  "type": "and"
                                },
                                "query": {
                                  "params": [
                                    "C"
                                  ]
                                },
                                "reducer": {
                                  "params": [],
                                  "type": "last"
                                },
                                "type": "query"
                              }
                            ],
                            "datasource": {
                              "type": "__expr__",
                              "uid": "__expr__"
                            },
                            "expression": "A",
                            "intervalMs": 1000,
                            "maxDataPoints": 43200,
                            "refId": "B",
                            "type": "threshold"
                          },
                          "refId": "B",
                          "relativeTimeRange": {
                            "from": 600
                          }
                        }
                      ],
                      "execErrState": "Error",
                      "for": "5m0s",
                      "noDataState": "NoData",
                      "title": "Temperature below freezing",
                      "uid": "4843de5c-4f8a-4af0-9509-23526a04faf8"
                    }
                  ]
                }
              },
              {
                "apiVersion": "grafana.integreatly.org/v1beta1",
                "kind": "GrafanaContactPoint",
                "metadata": {
                  "labels": {
                    "app.kubernetes.io/created-by": "grafana-operator",
                    "app.kubernetes.io/instance": "grafanacontactpoint-sample",
                    "app.kubernetes.io/managed-by": "kustomize",
                    "app.kubernetes.io/name": "grafanacontactpoint",
                    "app.kubernetes.io/part-of": "grafana-operator"
                  },
                  "name": "grafanacontactpoint-sample"
                },
                "spec": {
                  "instanceSelector": {
                    "matchLabels": {
                      "dashboards": "grafana-a"
                    }
                  },
                  "name": "grafanacontactpoint-sample",
                  "settings": {
                    "email": null
                  },
                  "type": "email"
                }
              },
              {
                "apiVersion": "grafana.integreatly.org/v1beta1",
                "kind": "GrafanaDashboard",
                "metadata": {
                  "name": "grafanadashboard-sample"
                },
                "spec": {
                  "instanceSelector": {
                    "matchLabels": {
                      "dashboards": "grafana-a"
                    }
                  },
                  "json": "{\n\n  \"id\": null,\n  \"title\": \"Simple Dashboard\",\n  \"tags\": [],\n  \"style\": \"dark\",\n  \"timezone\": \"browser\",\n  \"editable\": true,\n  \"hideControls\": false,\n  \"graphTooltip\": 1,\n  \"panels\": [],\n  \"time\": {\n    \"from\": \"now-6h\",\n    \"to\": \"now\"\n  },\n  \"timepicker\": {\n    \"time_options\": [],\n    \"refresh_intervals\": []\n  },\n  \"templating\": {\n    \"list\": []\n  },\n  \"annotations\": {\n    \"list\": []\n  },\n  \"refresh\": \"5s\",\n  \"schemaVersion\": 17,\n  \"version\": 0,\n  \"links\": []\n}\n"
                }
              },
              {
                "apiVersion": "grafana.integreatly.org/v1beta1",
                "kind": "GrafanaDatasource",
                "metadata": {
                  "name": "grafanadatasource-sample"
                },
                "spec": {
                  "datasource": {
                    "access": "proxy",
                    "isDefault": true,
                    "jsonData": {
                      "timeInterval": "5s",
                      "tlsSkipVerify": true
                    },
                    "name": "prometheus",
                    "type": "prometheus",
                    "url": "http://prometheus-service:9090"
                  },
                  "instanceSelector": {
                    "matchLabels": {
                      "dashboards": "grafana-a"
                    }
                  },
                  "plugins": [
                    {
                      "name": "grafana-clock-panel",
                      "version": "1.3.0"
                    }
                  ]
                }
              },
              {
                "apiVersion": "grafana.integreatly.org/v1beta1",
                "kind": "GrafanaFolder",
                "metadata": {
                  "name": "grafanafolder-sample"
                },
                "spec": {
                  "instanceSelector": {
                    "matchLabels": {
                      "dashboards": "grafana-a"
                    }
                  },
                  "title": "Example Folder"
                }
              }
            ]
          capabilities: Basic Install
          categories: Monitoring
          containerImage: ghcr.io/grafana/grafana-operator@sha256:97561cef949b58f55ec67d133c02ac205e2ec3fb77388aeb868dacfcebad0752
          createdAt: "2024-09-11T09:16:51Z"
          description: Deploys and manages Grafana instances, dashboards and data sources
          olm.operatorGroup: grafana-operator-og
          olm.operatorNamespace: grafana-operator
          olm.targetNamespaces: grafana-operator
          operatorframework.io/properties: '{"properties":[{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"Grafana","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaAlertRuleGroup","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaContactPoint","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaDashboard","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaDatasource","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaFolder","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaNotificationPolicy","version":"v1beta1"}},{"type":"olm.package","value":{"packageName":"grafana-operator","version":"5.13.0"}}]}'
          operators.operatorframework.io/builder: operator-sdk-v1.32.0
          operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
          repository: https://github.com/grafana/grafana-operator
          support: Grafana Labs
        creationTimestamp: null
        labels:
          app.kubernetes.io/managed-by: olm
          app.kubernetes.io/name: grafana-operator
      spec:
        containers:
        - args:
          - --health-probe-bind-address=:8081
          - --metrics-bind-address=0.0.0.0:9090
          - --leader-elect
          env:
          - name: RELATED_IMAGE_GRAFANA
            value: docker.io/grafana/grafana@sha256:b7fcb534f7b3512801bb3f4e658238846435804deb479d105b5cdc680847c272
          - name: WATCH_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.annotations['olm.targetNamespaces']
          - name: OPERATOR_CONDITION_NAME
            value: grafana-operator.v5.13.0
          image: ghcr.io/grafana/grafana-operator@sha256:a2d35af04ec0773f62d9b75966d0a3f8b24998e126b8ad243afb0377deb8e635
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 20
            successThreshold: 1
            timeoutSeconds: 1
          name: manager
          ports:
          - containerPort: 9090
            name: metrics
            protocol: TCP
          - containerPort: 8888
            name: pprof
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 200m
              memory: 550Mi
            requests:
              cpu: 100m
              memory: 20Mi
          securityContext:
            allowPrivilegeEscalation: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          runAsNonRoot: true
        serviceAccount: grafana-operator-controller-manager
        serviceAccountName: grafana-operator-controller-manager
        terminationGracePeriodSeconds: 10
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2024-10-08T14:10:00Z"
      lastUpdateTime: "2024-10-08T14:10:00Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2024-10-08T14:09:49Z"
      lastUpdateTime: "2024-10-08T14:10:00Z"
      message: ReplicaSet "grafana-operator-controller-manager-v5-7db6978bbf" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
```
I was able to reproduce this, and it seems to be an issue in the operator logic rather than in the bundling process. I've renamed the issue accordingly.
`RELATED_IMAGE_GRAFANA` is only used if `spec.version` is empty. Otherwise, the default image (baked into the operator) is always used. @jnm27 can you share the Grafana CR you're creating? Are you setting the `spec.version` field?
@pb82 we have a code path that sets the version on first install if it's not set - this then causes `RELATED_IMAGE` to never be used :/
We could ignore the `spec.version` field if `RELATED_IMAGE_GRAFANA` references a digest instead of a tag. That should fix the issue for disconnected clusters and keep compatibility with other installation types.
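The proposed precedence could be sketched roughly as follows. This is a minimal illustration, not the operator's actual code; the function and parameter names (`resolveImage`, `relatedImage`, `specVersion`, `defaultRepo`) are hypothetical, and digest detection is simplified to a substring check for `@sha256:`.

```go
package main

import (
	"fmt"
	"strings"
)

// isDigestPinned reports whether an image reference pins a digest
// (e.g. "docker.io/grafana/grafana@sha256:...") rather than a tag.
// Simplified: real references can also use other digest algorithms.
func isDigestPinned(ref string) bool {
	return strings.Contains(ref, "@sha256:")
}

// resolveImage sketches the proposed precedence:
//  1. a digest-pinned RELATED_IMAGE_GRAFANA wins even when spec.version is set
//     (disconnected clusters pull the mirrored digest);
//  2. otherwise spec.version selects a tag of the default repository,
//     preserving today's behavior for other installation types;
//  3. otherwise fall back to a tag-based related image, if any.
func resolveImage(relatedImage, specVersion, defaultRepo string) string {
	if isDigestPinned(relatedImage) {
		return relatedImage
	}
	if specVersion != "" {
		return defaultRepo + ":" + specVersion
	}
	if relatedImage != "" {
		return relatedImage
	}
	return defaultRepo + ":latest"
}

func main() {
	// With a digest-pinned related image, spec.version is ignored.
	fmt.Println(resolveImage(
		"docker.io/grafana/grafana@sha256:b7fcb534", // truncated for brevity
		"10.4.3",
		"docker.io/grafana/grafana",
	))
}
```

Under this sketch, the deployment above would get the mirrored digest reference from `RELATED_IMAGE_GRAFANA` instead of `docker.io/grafana/grafana:10.4.3`, even though the operator has already written a version into the CR.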
Excerpt from imageset-config.yml:
This yields the tag a6124978 instead of the expected 10.4.3.
When creating a Grafana instance, the Grafana operator on OpenShift tries to pull the grafana container image by the tag 10.4.3, so it can't find it.
I'm not sure whether this is a problem with how the image is tagged in the mirror registry, or whether the tag doesn't matter and the problem is that the operator pulls the image by tag instead of by digest.