fniko opened 8 months ago
~I have discovered a typo within my values.yml file which caused this error. Closing, sorry.~
I thought the issue was caused by a typo, but it seems there is a deeper interaction between the configuration blocks. I am reopening this issue with an updated description.
Also passing along the output from `helm template`; I am not including it in the original post to keep that clearer.
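The manifest below was rendered with a command along these lines (a sketch only; the release name, repository alias and values file path are assumptions):
# Sketch only - release name, repo alias and values file path are assumptions
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack \
  --values values.yml \
  --show-only templates/prometheus/prometheus.yaml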
# Source: kube-prometheus-stack/templates/prometheus/prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: kube-prometheus-stack-prometheus
  namespace: kube-prometheus-stack
  labels:
    app: kube-prometheus-stack-prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: kube-prometheus-stack
    app.kubernetes.io/version: "56.7.0"
    app.kubernetes.io/part-of: kube-prometheus-stack
    chart: kube-prometheus-stack-56.7.0
    release: "kube-prometheus-stack"
    heritage: "Helm"
spec:
  alerting:
    alertmanagers:
      - namespace: kube-prometheus-stack
        name: kube-prometheus-stack-alertmanager
        port: http-web
        pathPrefix: "/"
        apiVersion: v2
  image: "quay.io/prometheus/prometheus:v2.49.1"
  version: v2.49.1
  additionalArgs:
    - name: storage.tsdb.max-block-duration
      value: 30s
  externalUrl: http://kube-prometheus-stack-prometheus.kube-prometheus-stack:9090
  paused: false
  replicas: 1
  shards: 1
  logLevel: info
  logFormat: logfmt
  listenLocal: false
  enableAdminAPI: false
  retention: "10d"
  tsdb:
    outOfOrderTimeWindow: 0s
  walCompression: true
  routePrefix: "/"
  serviceAccountName: kube-prometheus-stack-prometheus
  serviceMonitorSelector:
    matchLabels:
      release: "kube-prometheus-stack"
  serviceMonitorNamespaceSelector: {}
  podMonitorSelector:
    matchLabels:
      release: "kube-prometheus-stack"
  podMonitorNamespaceSelector: {}
  probeSelector:
    matchLabels:
      release: "kube-prometheus-stack"
  probeNamespaceSelector: {}
  securityContext:
    fsGroup: 2000
    runAsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  ruleNamespaceSelector: {}
  ruleSelector:
    matchLabels:
      release: "kube-prometheus-stack"
  scrapeConfigSelector:
    matchLabels:
      release: "kube-prometheus-stack"
  scrapeConfigNamespaceSelector: {}
  thanos:
    image: quay.io/thanos/thanos:v0.28.1
    objectStorageConfig:
      key: object-storage-configs.yaml
      name: kube-prometheus-stack-prometheus
  portName: http-web
  hostNetwork: false
When trying to just apply this (for debug purposes):
kubectl apply -f above-config.yml
Error from server (BadRequest): error when creating "above-config.yml": Prometheus in version "v1" cannot be handled as a Prometheus: strict decoding error: unknown field "spec.scrapeConfigNamespaceSelector", unknown field "spec.scrapeConfigSelector"
EDIT: The above error was fixed by removing the CRD (see Uninstall Helm Chart).
Current version from the CRD:
kubectl describe crd prometheuses.monitoring.coreos.com
Annotations: controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.71.2
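For what it's worth, the unknown-field error above indicates the installed prometheuses.monitoring.coreos.com CRD predates the scrapeConfig selector fields. Instead of uninstalling, the CRD can also be updated in place; a rough sketch, assuming the usual prometheus-operator repository layout (verify the URL against the chart's upgrade notes for 56.x):
# Sketch only - the URL is an assumption; check the kube-prometheus-stack upgrade docs
kubectl apply --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.2/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml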
Ok, I did more debugging and, after manually applying the above prometheus.yml, the output of `kubectl describe prometheus kube-prometheus-stack-prometheus` is:
making statefulset failed: make StatefulSet spec: can't set arguments which are already managed by the operator: storage.tsdb.max-block-duration,storage.tsdb.min-block-duration
Wider output (less readable, though):
    Message:               shard 0: statefulset kube-prometheus-stack/prometheus-kube-prometheus-stack-prometheus not found
    Observed Generation:   1
    Reason:                StatefulSetNotFound
    Status:                False
    Type:                  Available
    Last Transition Time:  2024-02-19T01:49:52Z
    Message:               making statefulset failed: make StatefulSet spec: can't set arguments which are already managed by the operator: storage.tsdb.max-block-duration
    Observed Generation:   1
    Reason:                ReconciliationFailed
    Status:                False
    Type:                  Reconciled
How should this be handled?
The tsdb block duration arguments can be set through `additionalArgs` only if `disableCompaction` is not set (default is false), i.e. if compaction is enabled. If set to true, the operator does not allow overriding the arguments.
Furthermore, if `spec.thanos` is set in the Prometheus CR with `objectStorageConfig` defined, i.e. uploads are active, the operator disables compaction by setting the two block duration arguments equal. Under these conditions, you may wish to have a look at `blockSize` in the Thanos spec. The field is not present in the values under `prometheus.prometheusSpec.thanos`, but it will be picked up once inserted.
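To illustrate the conflict, a values sketch of the combination that triggers the error (pieced together from the rendered manifest above; the object storage details are placeholders, not a recommended configuration):
prometheus:
  prometheusSpec:
    # Conflict: once object storage uploads are enabled below, the operator
    # manages storage.tsdb.min/max-block-duration itself.
    additionalArgs:
      - name: storage.tsdb.max-block-duration
        value: 30s
    thanos:
      image: quay.io/thanos/thanos:v0.28.1
      objectStorageConfig:
        secret:
          type: S3
          config:
            bucket: "thanos"
            endpoint: "region.provider.com"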
Oh, OK. Thank you for your help. I think I will be closing this issue, because it is not a bug but rather a configuration mismatch. Or do you think it makes sense to improve the docs or some other aspect of the Helm chart? If not, I will close the issue immediately.
Configuration that works, for others:
prometheus:
  prometheusSpec:
    # Configure Thanos
    thanos:
      image: quay.io/thanos/thanos:v0.28.1
      blockSize: "30s"
      objectStorageConfig:
        secret:
          type: S3
          config:
            bucket: "thanos"
            endpoint: "region.provider.com"
            access_key: "xxx"
            secret_key: "xxx"
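Rolled out with something along these lines (release name, repository alias and namespace are assumptions):
# Sketch only - adjust the release name, repo alias and namespace to your setup
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack \
  --values values.yml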
Describe the bug (a clear and concise description of what the bug is)
Upon trying to set `storage.tsdb.min-block-duration` using `additionalArgs` while the `thanos` `objectStorageConfig` configuration is present, the Prometheus StatefulSet is not created. ~After a clean install using Helm, I am observing two strange warnings - it might be related~ (fixed by removing the old CRD)
What's your helm version?
3.14.1
What's your kubectl version?
1.24.2
Which chart?
kube-prometheus-stack
What's the chart version?
56.7.0
What happened?
After using custom values in order to increase the Thanos sync frequency to remote storage, Prometheus did not reflect those changes. On a clean install, Prometheus did not show up at all; it seems the StatefulSet is not created. The issue appears to be with the `objectStorageConfig` under the `thanos` configuration block. When it is removed (see `values.yml` below), `prometheus` starts to behave as expected.
Helm output
What you expected to happen?
How to reproduce it?
- `values.yml` file with provided values
- `helm` deploy command as provided
Enter the changed values of values.yml?
Enter the command that you execute and failing/misfunctioning.
Anything else we need to know?
This `values.yml` configuration works as expected - `max-block-duration` is set and the sidecar is live.
Full outputs
helm ls
kubectl get pod
kubectl get deploy
kubectl get statefulset