mvkrishna86 closed this issue 10 months ago.
Can you share the YAMLs for the following resources?

- ServiceAccount
- ClusterRole
- ClusterRoleBinding
- Deployment
My hunch is that there's a permission misconfiguration 😁
AFAIK those configs are fine, @hainenber. Below are the files.
DaemonSet:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "9"
    reloader.stakater.com/auto: "true"
  creationTimestamp: "2024-01-16T13:53:38Z"
  generation: 9
  labels:
    app: grafana-agent
    app.kubernetes.io/instance: grafana-agent
  name: grafana-agent
  namespace: observability
  resourceVersion: "32677495"
  uid: a8f5f958-18a0-4878-a60f-c31f8b3d4a88
spec:
  minReadySeconds: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: grafana-agent
      name: grafana-agent
  template:
    metadata:
      annotations:
        reloader.stakater.com/last-reloaded-from: '{"type":"CONFIGMAP","name":"grafana-agent-daemonset-configmap","namespace":"observability","hash":"83c124ffb41ec32d16e80499d4672270fb9dffd0","containerRefs":["grafana-agent"],"observedAt":1705495986}'
      creationTimestamp: null
      labels:
        app: grafana-agent
        name: grafana-agent
    spec:
      containers:
      - args:
        - -config.file=/etc/agent/daemonset-config.yaml
        command:
        - /bin/grafana-agent
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: HTTP_PROXY
          value: http://XXXX:9999/
        - name: HTTPS_PROXY
          value: http://XXXX:9999/
        image: public.ecr.aws/XXXX/agent:v0.36.1
        imagePullPolicy: IfNotPresent
        name: grafana-agent
        ports:
        - containerPort: 8080
          hostPort: 8080
          name: http-metrics
          protocol: TCP
        - containerPort: 6831
          hostPort: 6831
          name: thrift-compact
          protocol: UDP
        - containerPort: 6832
          hostPort: 6832
          name: thrift-binary
          protocol: UDP
        - containerPort: 14268
          hostPort: 14268
          name: thrift-http
          protocol: TCP
        - containerPort: 14250
          hostPort: 14250
          name: thrift-grpc
          protocol: TCP
        - containerPort: 9411
          hostPort: 9411
          name: zipkin
          protocol: TCP
        - containerPort: 55680
          hostPort: 55680
          name: otlp
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/agent
          name: grafana-agent-daemonset-configmap
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /etc/machine-id
          name: etcmachineid
          readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: grafana-agent
      serviceAccountName: grafana-agent
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: grafana-agent-daemonset-configmap
        name: grafana-agent-daemonset-configmap
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /etc/machine-id
          type: ""
        name: etcmachineid
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
```
ServiceAccount:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-12-07T14:36:36Z"
  labels:
    app.kubernetes.io/instance: grafana-agent
  name: grafana-agent
  namespace: observability
```
ClusterRole:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: grafana-agent
  name: grafana-agent
  resourceVersion: "1947912"
  uid: 8c4ac0f6-f15a-4725-b41b-1b7c1e818760
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get
```
ClusterRoleBinding:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: grafana-agent
  name: grafana-agent
  resourceVersion: "32105710"
  uid: 4960e7ab-cbb5-455c-813e-76d824595b61
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana-agent
subjects:
- kind: ServiceAccount
  name: grafana-agent
  namespace: observability
```
Got the issue: I had set up HTTP_PROXY for a different reason, and it was blocking the calls to kubernetes.default.svc. I added NO_PROXY and the problem is solved.
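For anyone hitting the same thing: service discovery talks to the API server at kubernetes.default.svc, and once HTTP_PROXY/HTTPS_PROXY are set, those in-cluster calls get routed through the proxy unless the host is explicitly excluded. A sketch of the extra env entry for the DaemonSet container — the exact hostnames and CIDRs to exclude depend on your cluster, so the service CIDR below is an assumption, not taken from this setup:

```yaml
env:
- name: NO_PROXY
  # Bypass the proxy for in-cluster traffic; adjust the CIDR to your
  # cluster's service/pod ranges (10.0.0.0/8 here is a placeholder).
  value: kubernetes.default.svc,.svc,.cluster.local,10.0.0.0/8
```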
What's wrong?
kubernetes_sd_configs is not working: the agent is unable to collect pod logs when kubernetes_sd_configs is used.
Steps to reproduce
Below is the config:
```shell
root@brane-uat-c22742-worker-4:/# curl localhost:12345/agent/api/v1/logs/instances
{"status":"success","data":["agent"]}
root@brane-uat-c22742-worker-4:/# curl localhost:12345/agent/api/v1/logs/targets
{"status":"success","data":[]}
```
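For context, a minimal static-mode logs config of the general shape being debugged here — an empty `data` from `/logs/targets` means pod discovery returned nothing even though the instance is running. The Loki endpoint and relabeling below are illustrative placeholders, not the actual (unposted) config:

```yaml
logs:
  configs:
  - name: agent
    clients:
    - url: http://loki.example:3100/loki/api/v1/push  # placeholder endpoint
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod  # discovers pods via the API server; this is the call a proxy can silently swallow
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_node_name]
        target_label: __host__
```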
System information
No response
Software version
Grafana Agent v0.36.1
Configuration
No response
Logs
No response