Open edtshuma opened 2 months ago
Did you install the integration on your Grafana Cloud Connections menu, as described here?
I am not running on Grafana Cloud. My installation is on AWS EKS.
How and where do I run this command from? My Alloy is installed as a set of Helm releases (via Flux/kustomize) with the following resources:
- Cluster Helm Release: config-cluster.yaml, release-cluster.yaml
- Node Helm Release: config-node.yaml, release-node.yaml
and finally a kustomization.yaml file:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - behavior: create
    files:
      - config-cluster.alloy
      - config-node.alloy
    name: alloy-config
    namespace: monitoring
    options:
      disableNameSuffixHash: true
resources:
  - release-cluster.yaml
  - release-node.yaml
```
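Because `disableNameSuffixHash: true` is set, the generated ConfigMap keeps the fixed name `alloy-config` across rebuilds. A quick way to confirm the generator actually produced it with both `.alloy` files as keys (a sketch, assuming kubectl access to the `monitoring` namespace):

```shell
# List only the data keys of the generated ConfigMap; expect
# config-cluster.alloy and config-node.alloy on separate lines.
kubectl -n monitoring get configmap alloy-config \
  -o go-template='{{range $k, $_ := .data}}{{$k}}{{"\n"}}{{end}}'
```

If either key is missing, the Kustomization was not reconciled with the expected files.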
And Alloy is deployed as a StatefulSet:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    meta.helm.sh/release-name: alloy-cluster
    meta.helm.sh/release-namespace: monitoring
  labels:
    app.kubernetes.io/instance: alloy-cluster
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: alloy-cluster
    app.kubernetes.io/part-of: alloy
    app.kubernetes.io/version: v1.3.0
    helm.sh/chart: alloy-0.6.0
    helm.toolkit.fluxcd.io/name: alloy-cluster
    helm.toolkit.fluxcd.io/namespace: monitoring
  name: alloy-cluster
  namespace: monitoring
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
    whenScaled: Delete
  podManagementPolicy: Parallel
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: alloy-cluster
      app.kubernetes.io/name: alloy-cluster
  serviceName: alloy-cluster
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: alloy
      labels:
        app.kubernetes.io/instance: alloy-cluster
        app.kubernetes.io/name: alloy-cluster
    spec:
      containers:
        - args:
            - run
            - /etc/alloy/config-cluster.alloy
            - --storage.path=/var/lib/alloy
            - --server.http.listen-addr=0.0.0.0:12345
            - --server.http.ui-path-prefix=/
            - --disable-reporting
            - --cluster.enabled=true
            - --cluster.join-addresses=alloy-cluster-cluster
            - --stability.level=generally-available
          env:
            - name: ALLOY_DEPLOY_MODE
              value: helm
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          image: 0123456789.dkr.ecr.eu-west-1.amazonaws.com/pullthrough/docker.io/grafana/alloy:v1.3.0
          imagePullPolicy: IfNotPresent
          name: alloy
          ports:
            - containerPort: 12345
              name: http-metrics
              protocol: TCP
            - containerPort: 3100
              name: http-loki
              protocol: TCP
            - containerPort: 4317
              name: grpc-otlp
              protocol: TCP
            - containerPort: 4318
              name: http-otlp
              protocol: TCP
            - containerPort: 4319
              name: grpc-otlp-pub
              protocol: TCP
            - containerPort: 9090
              name: http-prom
              protocol: TCP
            - containerPort: 9411
              name: zipkin
              protocol: TCP
            - containerPort: 6831
              name: thrift-compact
              protocol: UDP
            - containerPort: 6832
              name: thrift-binary
              protocol: UDP
            - containerPort: 14250
              name: jaeger-grpc
              protocol: TCP
            - containerPort: 14268
              name: thrift-http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /-/ready
              port: 12345
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 500m
              memory: 3Gi
          volumeMounts:
            - mountPath: /etc/alloy
              name: config
        - args:
            - --volume-dir=/etc/alloy
            - --webhook-url=http://localhost:12345/-/reload
          image: 0123456789.dkr.ecr.eu-west-1.amazonaws.com/pullthrough/ghcr.io/jimmidyson/configmap-reload:v0.12.0
          imagePullPolicy: IfNotPresent
          name: config-reloader
          resources:
            limits:
              cpu: 50m
              memory: 16Mi
            requests:
              cpu: 1m
              memory: 8Mi
          volumeMounts:
            - mountPath: /etc/alloy
              name: config
      dnsPolicy: ClusterFirst
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      serviceAccount: alloy-cluster
      serviceAccountName: alloy-cluster
      terminationGracePeriodSeconds: 30
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/instance: alloy-cluster
              app.kubernetes.io/name: alloy-cluster
          maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
        - labelSelector:
            matchLabels:
              app.kubernetes.io/instance: alloy-cluster
              app.kubernetes.io/name: alloy-cluster
          maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
      volumes:
        - configMap:
            defaultMode: 420
            name: alloy-config
          name: config
        - emptyDir: {}
          name: alloy-data
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: alloy-data
      spec:
        accessModes:
          - ReadWriteOncePod
        resources:
          requests:
            storage: 10Gi
        storageClassName: ebs-sc
        volumeMode: Filesystem
```
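The manifest above wires the readiness probe to `/-/ready` and the config-reloader sidecar to `/-/reload`, both on port 12345. A quick smoke test of those endpoints on one pod (a sketch, assuming the pod name `alloy-cluster-0` from the StatefulSet above):

```shell
# Forward the Alloy HTTP port from the first replica, then hit the two
# endpoints the manifest references.
kubectl -n monitoring port-forward alloy-cluster-0 12345:12345 &
sleep 2
curl -s http://localhost:12345/-/ready           # readiness probe path
curl -s -X POST http://localhost:12345/-/reload  # what config-reloader posts to
kill %1
```

The same port also serves the Alloy UI, which shows component health and clustering status.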
This issue has not had any activity in the past 30 days, so the needs-attention label has been added to it.
If the opened issue is a bug, check to see if a newer release fixed your issue. If it is no longer relevant, please feel free to close this issue.
The needs-attention label signals to maintainers that something has fallen through the cracks. No action is needed by you; your issue will be kept open and you do not have to respond to this comment. The label will be removed the next time this job runs if there is new activity.
Thank you for your contributions!
What's wrong?
I have enabled the Grafana Alloy Health integration. As per the documentation, when this is enabled the deployment should also contain some default alerts. I cannot see any of these alerts in my environment.
Steps to reproduce
Install Grafana Alloy Helm Chart with the following version:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: alloy-cluster
  namespace: monitoring
spec:
  chart:
    spec:
      chart: 3rdparty/grafana/alloy
      sourceRef:
        kind: HelmRepository
        name: orion
        namespace: flux-system
      version: '0.6.0'
  dependsOn:
```
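Since the release is managed by Flux, it is worth confirming the HelmRelease actually reconciled before looking for the missing alerts (a sketch; the `flux` CLI line assumes it is installed locally):

```shell
# Ready=True means the chart (and any alert resources it templates) was applied
kubectl -n monitoring get helmrelease alloy-cluster

# Optionally force a reconcile and watch for errors
flux reconcile helmrelease alloy-cluster -n monitoring
```

A HelmRelease stuck in a failed or pending state would explain alert rules never being created.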
Check for alert rules using kubectl, or in the Grafana (Mimir) UI under Alert rules.
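One way to run the kubectl side of that check, assuming the alerts would be shipped as in-cluster `PrometheusRule` objects (an assumption; depending on the setup they may instead live only in Mimir's ruler and be visible just in the Grafana UI):

```shell
# List PrometheusRule objects in the namespace; requires the
# prometheus-operator CRDs to be installed.
kubectl -n monitoring get prometheusrules

# Grep any Alloy-related rules for the alert names they define
kubectl -n monitoring get prometheusrules -o yaml | grep 'alert:'
```

If the CRD is absent or the list is empty, the chart never created alert resources in-cluster.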
System information
Server Version: v1.29.6-eks-db838b0
Software version
Grafana Alloy v1.3.0
Configuration
Logs
No response