Open jseiser opened 4 months ago
This issue has not had any activity in the past 30 days, so the needs-attention label has been added to it.
If the opened issue is a bug, check to see if a newer release fixed your issue. If it is no longer relevant, please feel free to close this issue.
The needs-attention label signals to maintainers that something has fallen through the cracks. No action is needed by you; your issue will be kept open and you do not have to respond to this comment. The label will be removed the next time this job runs if there is new activity.
Thank you for your contributions!
@jseiser How are you populating your substitution values, i.e. faro-${cluster_number}-${environment}.${base_domain}? Is this possible in Alloy .config files?
@tshuma1
Terraform is doing it, so the files are interpolated by the time the helm command is run.
Is there any other information I can provide here? We have tried running it as a Deployment and as a DaemonSet, with and without Alloy being in the service mesh. We have even hardcoded the OTEL attributes and removed the k8s attributes, but you still end up with the traces from linkerd not being matched.
We have not been able to find a working example of AWS EKS + Grafana Alloy. The issue also appears to extend to the actual OTLP Collector itself.
This is still an issue on the latest stable release.
What's wrong?
When enabling k8sattributes on Grafana Alloy running in EKS, you end up getting information from Alloy, not from the originating pod, so you end up with worthless attributes. Note the log at the end is from an nginx ingress pod in the namespace nginx-ingress-internal, but all the attributes are for a Grafana Alloy pod. You can see the ip for the pod is correct in the trace below, but nothing else, e.g.
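For reference, a minimal sketch of the kind of Alloy pipeline involved. This is not the reporter's actual config; the component labels and exporter endpoint are hypothetical, and the `pod_association` block matching on the `k8s.pod.ip` resource attribute (rather than the connection address, which a service mesh proxy like linkerd can rewrite) is one commonly suggested workaround, assuming the sending SDK sets that attribute:

```alloy
otelcol.receiver.otlp "default" {
  grpc {}
  http {}

  output {
    traces = [otelcol.processor.k8sattributes.default.input]
  }
}

otelcol.processor.k8sattributes "default" {
  extract {
    metadata = [
      "k8s.namespace.name",
      "k8s.pod.name",
      "k8s.deployment.name",
    ]
  }

  // When Alloy sits behind a mesh proxy, the connection source address
  // may belong to the proxy rather than the originating workload, so
  // associating by a resource attribute can avoid mis-attribution.
  pod_association {
    source {
      from = "resource_attribute"
      name = "k8s.pod.ip"
    }
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}

// Hypothetical downstream exporter, shown only for completeness.
otelcol.exporter.otlp "default" {
  client {
    endpoint = "tempo.example.internal:4317"
  }
}
```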
Steps to reproduce
System information
No response
Software version
v1.2.1
Configuration
Logs