Telefonica / prometheus-kafka-adapter

Use Kafka as a remote storage database for Prometheus (remote write only)
Apache License 2.0

Why do these logs happen? Error 404 occurs when kube-prometheus-stack calls /healthz. #82

Open rlatjd1f opened 2 years ago

rlatjd1f commented 2 years ago

kube-prometheus-stack Pod Log

{"level":"warning","msg":"invalid serialization format, using json","serialization-format-value":"","time":"2021-11-08T05:31:10Z"}
{"level":"info","msg":"creating kafka producer","time":"2021-11-08T05:31:10Z"}
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env: export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET /metrics --> github.com/gin-gonic/gin.WrapH.func1 (3 handlers)
[GIN-debug] POST /receive --> main.receiveHandler.func1 (3 handlers)
[GIN-debug] Environment variable PORT="8080"
[GIN-debug] Listening and serving HTTP on :8080
{"fields.time":"2021-11-08T05:31:14Z","ip":"100.100.106.112","latency":5846,"level":"info","method":"GET","msg":"","path":"/healthz","status":404,"time":"2021-11-08T05:31:14Z","user-agent":"kube-probe/1.20"}
{"fields.time":"2021-11-08T05:31:18Z","ip":"100.100.106.112","latency":5806,"level":"info","method":"GET","msg":"","path":"/healthz","status":404,"time":"2021-11-08T05:31:18Z","user-agent":"kube-probe/1.20"}
{"fields.time":"2021-11-08T05:31:24Z","ip":"100.100.106.112","latency":5682,"level":"info","method":"GET","msg":"","path":"/healthz","status":404,"time":"2021-11-08T05:31:24Z","user-agent":"kube-probe/1.20"}
{"fields.time":"2021-11-08T05:31:28Z","ip":"100.100.106.112","latency":6364,"level":"info","method":"GET","msg":"","path":"/healthz","status":404,"time":"2021-11-08T05:31:28Z","user-agent":"kube-probe/1.20"}
{"fields.time":"2021-11-08T05:31:34Z","ip":"100.100.106.112","latency":7542,"level":"info","method":"GET","msg":"","path":"/healthz","status":404,"time":"2021-11-08T05:31:34Z","user-agent":"kube-probe/1.20"}
{"fields.time":"2021-11-08T05:31:38Z","ip":"100.100.106.112","latency":5171,"level":"info","method":"GET","msg":"","path":"/healthz","status":404,"time":"2021-11-08T05:31:38Z","user-agent":"kube-probe/1.20"}
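For what it's worth, the GIN-debug output above shows that this image only registers GET /metrics and POST /receive, so a kubelet probe on /healthz can only return 404. Until an image that serves /healthz is running, one possible workaround is to point the probes at /metrics instead. This is only a sketch of the container probe spec, assuming the container keeps listening on port 8080 as in the log; whether the chart you deploy with lets you override the probes depends on the chart.

livenessProbe:
  httpGet:
    path: /metrics   # registered by this image, unlike /healthz
    port: 8080
readinessProbe:
  httpGet:
    path: /metrics
    port: 8080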

values.yaml

environment:
  KAFKA_BROKER_LIST: "kafka.kafka:9092"
  KAFKA_TOPIC: "metrics"
  PORT: 8080

service:
  type: ClusterIP
  port: 80
  annotations: {}

100.100.106.112 is my node IP

I'm testing connecting kube-prometheus-stack to Kafka, and running the pod produces these errors.

In the kube-prometheus-stack template, the service type is set to ClusterIP, and the rest of the settings are left at their defaults.
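For context, this is roughly how the Prometheus side is wired to the adapter. A minimal sketch of the kube-prometheus-stack values, assuming the adapter is reachable as a Service named prometheus-kafka-adapter in the monitoring namespace on the ClusterIP port 80 from the values above (the service name and namespace are assumptions):

# Hypothetical kube-prometheus-stack values: remote-write samples to the adapter's /receive endpoint.
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://prometheus-kafka-adapter.monitoring.svc:80/receive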

sousadax12 commented 2 years ago

I was having the same error. I just built a new Docker image and used it instead of the one on Docker Hub.

palmerabollo commented 2 years ago

@sousadax12 did you change anything? I wasn't aware the image on Docker Hub was broken. Can you confirm whether you just ran docker build using the official Dockerfile at https://github.com/Telefonica/prometheus-kafka-adapter/blob/master/Dockerfile?

sousadax12 commented 2 years ago

I just did that, but I will paste here my version:

FROM golang:buster as build

WORKDIR /src/prometheus-kafka-adapter

COPY go.mod .
COPY go.sum .
RUN go mod download

ADD . /src/prometheus-kafka-adapter

RUN go build -o /prometheus-kafka-adapter -ldflags '-w -extldflags "-static"'
RUN go test ./...

FROM ubuntu:latest

COPY schemas/metric.avsc /schemas/metric.avsc
COPY --from=build /prometheus-kafka-adapter /

CMD /prometheus-kafka-adapter
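If you build and push this image yourself, the deployment then has to be pointed at it instead of the Docker Hub image. A hypothetical values override, assuming the chart follows the usual image.repository/image.tag convention (the registry and tag below are placeholders):

image:
  repository: registry.example.com/prometheus-kafka-adapter   # placeholder for your own registry
  tag: custom-build                                           # placeholder tag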

DasDepp commented 2 years ago

Hello, I have the same issue. Is the Docker image really defective, or could there be another cause?