Closed jgagnon44 closed 3 months ago
/triage accepted
/assign @dgrisonnet
@logicalhan @dgrisonnet any estimate on when you are planning to fix it? I face the same issue with the same version.
```
E0202 22:02:08.428816 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:41082: write: broken pipe"
E0205 04:56:32.728805 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write metrics family: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.728835 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.728857 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.728870 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.728888 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.928694 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.928740 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0205 04:56:32.928757 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:48750: write: connection reset by peer"
E0207 09:31:40.528764 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write metrics family: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
E0207 09:31:40.528815 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
E0207 09:31:40.528829 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
E0207 09:31:40.528848 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
E0207 09:31:40.528861 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
E0207 09:31:40.528875 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
E0207 09:31:40.528894 1 metrics_handler.go:215] "Failed to write metrics" err="failed to write help text: write tcp 10.11.31.68:8080->10.11.20.121:34628: write: broken pipe"
```
I don't have any bandwidth to investigate right now.
@CatherineF-dev do you perhaps have some time to take a look at this?
Ok
@bengoldenberg09
this is my deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitor
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: kube-state-metrics
      annotations:
        karpenter.sh/do-not-evict: 'true'
        prometheus.io/path: /metrics
        prometheus.io/port: '8080'
        prometheus.io/probe: 'true'
        prometheus.io/scrape: 'true'
    spec:
      containers:
```
So I have the latest one, and it is happening from time to time. When I was running the deployment, these were my logs:
```
I0207 13:17:41.754839 1 server.go:72] level=info msg="Listening on" address=[::]:8081
I0207 13:17:41.854588 1 server.go:72] level=info msg="TLS is disabled." http2=false address=[::]:8081
I0207 13:17:53.456710 1 trace.go:236] Trace[169429371]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.5/tools/cache/reflector.go:231 (07-Feb-2024 13:17:41.455) (total time: 12001ms):
Trace[169429371]: ---"Objects listed" error:
```

We're also using VictoriaMetrics and see the same issue. The scraping component is VMAgent.
image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0
We are always seeing this when the cluster is heavily loaded, with a bazillion Pods running and other churn.
We use k-s-m metrics to identify nodes in a JOIN, and the metrics always drop out when the churn rises. So it must be related to the "amount of stuff happening", for lack of a more cogent description.
See drop-outs here:
Correlated error logs:
Facing the same issue, current KSM version: v2.8.2
Logs:
```
E0219 03:57:11.745859 1 metrics_handler.go:213] "Failed to write metrics" err="failed to write help text: write tcp 1xx.xxx.63.7:8080->1xx.168.34.xx:55444: write: broken pipe"
E0219 03:57:11.745869 1 metrics_handler.go:213] "Failed to write metrics" err="failed to write help text: write tcp 1xx.xxx.63.7:8080->1xx.168.34.xx:55444: write: broken pipe"
E0219 03:57:11.745878 1 metrics_handler.go:213] "Failed to write metrics" err="failed to write help text: write tcp 1xx.xxx.63.7:8080->1xx.168.34.xx:55444: write: broken pipe"
```
KSM version: v2.10.0
I have been observing the same when there are a lot of pods in the cluster (over 5K). It works properly when the number of pods is lower (under 500).
qq: could anyone help provide detailed steps to reproduce this issue? Thx!
cc @bengoldenberg09, could you paste the deployment yaml again? The code formatting above got messed up.
cc @naweeng, do you remember how to reproduce?
This error is thrown from here https://github.com/kubernetes/kube-state-metrics/blob/v2.8.2/pkg/metricshandler/metrics_handler.go#L210-L215 with the error `write: broken pipe` or Go's `write: connection reset by peer`:

```go
// writer is the http.ResponseWriter
for _, w := range m.metricsWriters {
	err := w.WriteAll(writer)
	if err != nil {
		klog.ErrorS(err, "Failed to write metrics")
	}
}
```

The underlying failure comes from the `io.Writer`'s `Write()` call.
I feel this might be related to the Go version. Could you try v2.9.0+, which uses Go 1.19? cc @naweeng
Building with go 1.19, still the same error @CatherineF-dev
Can you check the scrape_duration_seconds for the job that scrapes it? If there's a broken pipe, the TCP/HTTP connection might get terminated early.
@bxy4543, could you help reproduce with detailed steps?
I'm seeing the same issue on our dev cluster (which is fairly large).
> Can you check the scrape_duration_seconds for the job that scrapes it? If there's a broken pipe, the TCP/HTTP connection might get terminated early.
```
scrape_duration_seconds{cluster="cluster-name",container="kube-state-metrics",endpoint="http",instance="xxx:8080",job="kube-state-metrics",namespace="vm",pod="victoria-metrics-k8s-stack-kube-state-metrics-847ddd64-splbf",prometheus="vm/victoria-metrics-k8s-stack",service="victoria-metrics-k8s-stack-kube-state-metrics"} 1.602
```
> @bxy4543, could you help reproduce with detailed steps?
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
    meta.helm.sh/release-name: victoria-metrics-k8s-stack
    meta.helm.sh/release-namespace: vm
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: victoria-metrics-k8s-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/part-of: kube-state-metrics
    app.kubernetes.io/version: 2.10.0
    helm.sh/chart: kube-state-metrics-5.12.1
  name: victoria-metrics-k8s-stack-kube-state-metrics
  namespace: vm
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: victoria-metrics-k8s-stack
      app.kubernetes.io/name: kube-state-metrics
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: metrics
        app.kubernetes.io/instance: victoria-metrics-k8s-stack
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/part-of: kube-state-metrics
        app.kubernetes.io/version: 2.10.0
        helm.sh/chart: kube-state-metrics-5.12.1
    spec:
      containers:
      - args:
        - --port=8080
        - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kube-state-metrics
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: victoria-metrics-k8s-stack-kube-state-metrics
      serviceAccountName: victoria-metrics-k8s-stack-kube-state-metrics
      terminationGracePeriodSeconds: 30
```
@CatherineF-dev
> Can you check the scrape_duration_seconds for the job that scrapes it? If there's a broken pipe, the TCP/HTTP connection might get terminated early.
I found the cause of the problem: since some data could still be obtained, I had not checked the target status. When I did check the target status, I found this error:

```
cannot read stream body in 1 seconds: the response from "http://xxx:8080/metrics" exceeds -promscrape.maxScrapeSize=16777216; either reduce the response size for the target or increase -promscrape.maxScrapeSize
```

So I increased the vmagent parameter `-promscrape.maxScrapeSize` (default is 16777216), and no more errors were reported.
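For anyone applying the same fix: raising the limit is done via a vmagent command-line flag. A minimal sketch of the container args, assuming vmagent is deployed as a plain container (names and the 64 MiB value are examples, not recommendations):

```yaml
# Hypothetical vmagent container spec fragment; adapt to your deployment.
containers:
- name: vmagent
  image: victoriametrics/vmagent
  args:
  - -promscrape.maxScrapeSize=67108864  # 64 MiB, up from the 16777216 (16 MiB) default
```

When the /metrics response exceeds this limit, vmagent aborts the scrape mid-read, which is what shows up on the kube-state-metrics side as `broken pipe` / `connection reset by peer`.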
Thx!
> cannot read stream body in 1 seconds: the response from "http://xxx:8080/metrics" exceeds -promscrape.maxScrapeSize=16777216; either reduce the response size for the target or increase -promscrape.maxScrapeSize
qq: where did you find this error log? Is it from prometheus?
> qq: where did you find this error log? Is it from prometheus?
I am using victoria-metrics; I found it on the vmagent target page.
@jgagnon44 @decipher27 @naweeng @towolf could you try the solution above?

> So I increased the vmagent parameter promscrape.maxScrapeSize (default is 16777216), and no more errors were reported.

If there are no issues, I will close this.
/close
@CatherineF-dev: Closing this issue.
What's the equivalent of the vmagent parameter `promscrape.maxScrapeSize` in Prometheus? If it is `body_size_limit`, it defaults to 0, which means no limit. I'm still having the same issue; let me know if you folks were able to implement this solution and saw results.
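For reference, `body_size_limit` is a per-scrape-config option in Prometheus, so a value of 0 (the default) should indeed rule out a size limit there; a short `scrape_timeout` is a more likely cause of early connection teardown. A sketch of a scrape config, with the job name and target as placeholders:

```yaml
scrape_configs:
- job_name: kube-state-metrics   # placeholder job name
  body_size_limit: 0             # 0 = uncompressed response size is unlimited (the default)
  scrape_timeout: 30s            # if the scrape exceeds this, Prometheus aborts the connection
  static_configs:
  - targets: ['kube-state-metrics.monitoring:8080']  # placeholder target
```

If Prometheus aborts at `scrape_timeout` while kube-state-metrics is still streaming the response, the exporter sees the same `broken pipe` errors discussed above.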
What happened: After noticing some "anomalies" in some metrics, I did a little Googling and checked the kube-state-metrics pod log. I found numerous messages like the following:
What you expected to happen: Not really sure, but I am sure these error messages are not good.
How to reproduce it (as minimally and precisely as possible):
I am not sure how to reproduce this, as I don't know what causes it to occur. I found it in the pod logs, then decided to restart the pod. Unfortunately, this blew away the old logs, and the errors have yet to appear in the new pod logs.
Anything else we need to know?:
Environment:
I used the kube-prometheus-stack Helm chart (https://github.com/prometheus-community/helm-charts/) to install Prometheus, Grafana, etc. on our Kubernetes cluster. The Helm chart version is 52.1.0. After inspecting the Helm chart dependencies, I believe the kube-state-metrics subchart is version 5.14.0.
I did a quick inspection of the pod spec and found the following:
Kubernetes version (use `kubectl version`):

Cloud provider or hardware configuration:
Our K8s cluster is an on-premises installation. The cluster nodes are running Ubuntu 20.04.