sayantanvlabs opened this issue 1 year ago (status: Open)
This configuration worked for us in minikube:

```yaml
autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints:
        enabled: true
        default_config:
          type: container
          paths:
            - /var/log/containers/*${data.kubernetes.container.id}.log
```
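For reference, here is a minimal sketch (not a verified manifest) of how such a hints-based provider sits inside an ECK `Beat` resource: the `filebeat.autodiscover` key under `spec.config`, the `type`, and the `version` mirror the resource dump later in this thread, while the resource name and the `elasticsearchRef` target are hypothetical placeholders.

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat-example          # hypothetical name
spec:
  type: filebeat
  version: 8.9.1
  elasticsearchRef:
    name: my-elasticsearch        # hypothetical Elasticsearch resource name
  config:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
  daemonSet: {}                   # a real setup adds a podTemplate with the /var/log host mounts, as in the resource below
```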
@kaykhan Tried that, it's not working either. Here is my Filebeat resource:
```
Name:         jaegerpoc-elastic
Namespace:    elastic-system
Labels:       <none>
Annotations:  association.k8s.elastic.co/es-conf:
                {"authSecretName":"jaegerpoc-elastic-beat-user","authSecretKey":"elastic-system-jaegerpoc-elastic-beat-user","isServiceAccount":false,"caC...
API Version:  beat.k8s.elastic.co/v1beta1
Kind:         Beat
Metadata:
  Creation Timestamp:  2023-09-04T08:54:30Z
  Generation:          2
  Managed Fields:
    API Version:  beat.k8s.elastic.co/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:config:
          .:
          f:filebeat.autodiscover:
            .:
            f:providers:
          f:output.elasticsearch:
            .:
            f:enabled:
          f:output.logstash:
            .:
            f:hosts:
        f:daemonSet:
          .:
          f:podTemplate:
            .:
            f:spec:
              .:
              f:automountServiceAccountToken:
              f:dnsPolicy:
              f:hostNetwork:
              f:securityContext:
                .:
                f:runAsUser:
              f:serviceAccountName:
              f:volumes:
        f:elasticsearchRef:
          .:
          f:name:
        f:type:
        f:version:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2023-09-04T08:54:30Z
    API Version:  beat.k8s.elastic.co/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:association.k8s.elastic.co/es-conf:
      f:spec:
        f:daemonSet:
          f:podTemplate:
            f:metadata:
              .:
              f:creationTimestamp:
            f:spec:
              f:containers:
          f:updateStrategy:
        f:kibanaRef:
        f:monitoring:
          .:
          f:logs:
          f:metrics:
    Manager:      elastic-operator
    Operation:    Update
    Time:         2023-09-04T08:54:38Z
    API Version:  beat.k8s.elastic.co/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:availableNodes:
        f:elasticsearchAssociationStatus:
        f:expectedNodes:
        f:health:
        f:observedGeneration:
        f:version:
    Manager:      elastic-operator
    Operation:    Update
    Subresource:  status
    Time:         2023-09-04T08:54:43Z
  Resource Version:  108204
  UID:               7beb32da-59a2-4693-97d1-b34571fbc150
Spec:
  Config:
    filebeat.autodiscover:
      Providers:
        Hints:
          default_config:
            Paths:
              /var/log/containers/*${data.kubernetes.container.id}.log
            Type:   container
          Enabled:  true
        Node:  minikube
        Type:  kubernetes
    output.elasticsearch:
      Enabled:  false
    output.logstash:
      Hosts:
        jaegerpoc-elastic-ls-beats.elastic-system.svc.cluster.local:5044
  Daemon Set:
    Pod Template:
      Metadata:
        Creation Timestamp:  <nil>
      Spec:
        Automount Service Account Token:  true
        Containers:
          Name:  filebeat
          Resources:
          Volume Mounts:
            Mount Path:  /var/log/containers
            Name:        varlogcontainers
            Mount Path:  /var/log/pods
            Name:        varlogpods
            Mount Path:  /var/lib/docker/containers
            Name:        varlibdockercontainers
        Dns Policy:            ClusterFirstWithHostNet
        Host Network:          true
        Security Context:
          Run As User:  0
        Service Account Name:  jaegerpoc-elastic-beat-sa
        Volumes:
          Host Path:
            Path:  /var/log/containers
          Name:    varlogcontainers
          Host Path:
            Path:  /var/log/pods
          Name:    varlogpods
          Host Path:
            Path:  /var/lib/docker/containers
          Name:    varlibdockercontainers
    Update Strategy:
  Elasticsearch Ref:
    Name:  jaegerpoc-elastic
  Kibana Ref:
  Monitoring:
    Logs:
    Metrics:
  Type:     filebeat
  Version:  8.9.1
Status:
  Available Nodes:                   1
  Elasticsearch Association Status:  Established
  Expected Nodes:                    1
  Health:                            green
  Observed Generation:               2
  Version:                           8.9.1
Events:
  Type     Reason                   Age                    From                            Message
  ----     ------                   ----                   ----                            -------
  Warning  AssociationError         6m12s (x3 over 6m12s)  beat-controller                 Association backend for elasticsearch is not configured
  Normal   AssociationStatusChange  6m11s                  beat-es-association-controller  Association status changed from [] to [Established]
```
I don't see any helpful logs from either Filebeat or Logstash, and there are no apparent errors, so I'm really not sure what's happening.
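As a side note, one way to get more signal out of a quiet Filebeat is to raise its own log level. A minimal sketch, assuming it sits next to `filebeat.autodiscover` under `spec.config` of the Beat resource:

```yaml
# Raise Filebeat's internal logging so autodiscover decisions show up in the Pod logs.
logging.level: debug
```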
Hi! We just realized that we haven't looked into this issue in a while. We're sorry! We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:. Thank you for your contribution!
So I am trying to set up Filebeat on Kubernetes to collect logs only from pods annotated with `co.elastic.logs/enabled: "true"`, as per the docs here. I can see the annotations on my pods, and the pods are producing logs, but the logs are not being collected by Filebeat. I am attaching my configuration; please let me know if I have done something wrong. If, instead of autodiscover, I set up Filebeat with an `inputs` configuration of type `container`, I can see logs from all the pods.
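For reference, a minimal sketch of the hints-based pattern that is generally used to collect logs only from annotated Pods: hints stay enabled but the default config is disabled, so nothing is harvested until a Pod opts in with the `co.elastic.logs/enabled: "true"` annotation quoted above. The exact keys (in particular `default_config.enabled: false`) follow the Filebeat hints-based autodiscover documentation as I understand it, not this cluster, so treat them as an assumption to verify:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints:
        enabled: true
        default_config:
          # Assumed opt-in behaviour: with the default config disabled, only Pods
          # annotated with co.elastic.logs/enabled: "true" are collected.
          enabled: false
          type: container
          paths:
            - /var/log/containers/*${data.kubernetes.container.id}.log
```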