Closed Dentrax closed 1 month ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Reopen this issue with /reopen
Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten
@Dentrax: Reopened this issue.
@Dentrax what would be your suggestion to improve the error?
scraper.go:140] "Failed to scrape node" err="request failed, status: \"404 Not Found\"" node="ttskublhrms11"
This current error means that scraping metrics from the node failed because node="ttskublhrms11"
was not found.
Adding more context to error messages may undo some of the work done in https://github.com/kubernetes-sigs/metrics-server/pull/774. Maybe the request here is for more verbose logging support (triggered by -v flags), so we can retain the current error behaviour but be verbose when the user demands it?
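The verbosity-gated idea could look roughly like the sketch below. The function name, message layout, and threshold are all assumptions for illustration, not metrics-server's actual code (the project uses klog, where a klog.V(n) check plays the same role):

```go
package main

import "fmt"

// scrapeFailureLines sketches verbosity-gated scrape logging: the terse
// error users see today is always emitted, while extra request context is
// added only when the user raises -v. All names here are hypothetical.
func scrapeFailureLines(node, url, status string, verbosity int) []string {
	lines := []string{
		// Default behaviour: keep the current short error message.
		fmt.Sprintf("\"Failed to scrape node\" err=\"request failed, status: %q\" node=%q", status, node),
	}
	if verbosity >= 6 {
		// Verbose mode: include the full request URL for troubleshooting.
		lines = append(lines, fmt.Sprintf("scrape request url=%q node=%q status=%q", url, node, status))
	}
	return lines
}
```

With a verbosity below the threshold only the first line is produced, so the default output stays identical to today's.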
/assign
Pinging @Dentrax.
Pong!
scraping metrics from the node failed because node="ttskublhrms11" was not found.
@dgrisonnet - actually, the node was already there, up and running. 404 Not Found still seems too generic to me.
Maybe the request here is for a more verbose logging support
Definitely! It'd be better to provide some context to light the way; it would eventually reduce troubleshooting time.
As I already dropped in the issue, questions I asked were:
From my developer perspective the existing error log already answers these questions, but maybe it is not clear enough from a user perspective, hence why I wanted to know what you were expecting.
If we take the error log you reported:
Request to where?
Failed to scrape node ... node="ttskublhrms11"
=> there was a scrape request sent to node ttskublhrms11. The scrape request is made to the kubelet /metrics/resource endpoint running on node ttskublhrms11. The fact that the request goes to the kubelet can be considered an implementation detail, which shouldn't be very useful to users when debugging.
404 of what?
\"404 Not Found\"" node="ttskublhrms11"
=> this means that the scrape request returned a 404 because node ttskublhrms11 wasn't present in the cluster.
Why?
The node might've been deleted while metrics-server was trying to grab metrics from it. This doesn't sound too harmful; it might just be that the list of nodes metrics-server held internally wasn't up-to-date yet.
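That race could be surfaced in the message itself. A minimal sketch, assuming a hypothetical helper (the function name and wording are not metrics-server code):

```go
package main

import "net/http"

// describeScrapeStatus sketches how a 404 from the kubelet could be
// translated into a user-facing hint instead of a bare status string.
// A 404 usually means the node disappeared between metrics-server
// listing it and scraping it. Hypothetical helper, not real code.
func describeScrapeStatus(status int) string {
	if status == http.StatusNotFound {
		return "node not found; it may have been deleted while metrics were being scraped"
	}
	// Fall back to the standard HTTP reason phrase for other statuses.
	return http.StatusText(status)
}
```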
/assign
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
Confirm that this issue is still relevant with /triage accepted (org members only)
Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue as fresh with /remove-lifecycle rotten
Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Reopen this issue with /reopen
Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What would you like to be added:
Improve the following error message to get better readability:
I don't understand why I'm getting a request failed error. We're proposing that wrapping additional errors and adding some logs (request path, response details, wrapping with fmt.Errorf, etc.) would be useful:
Why is this needed:
For better error readability.
cc @eminaktas
/kind feature