Hey @aseemand, where is the Splunk log of k8s.log coming from, and what does it indicate?
This is expected behaviour according to the kubernetes_metadata_filter plugin.
In one of the issues, the maintainers stated:
Pod info is scraped from the container log file name which provides the name of namespace but not its unique id. This means in a scenario where NS is created, deleted, recreated from the same name and logs still exist on disk, you could potentially be trying to associate meta from the original version of the NS even though its uuid is different than the latest.
The plugin has some logic to try and cache metadata based on ns, podname, uids, and creation dates. If using this combo of info it is unable to "match" a log entry to a known container, it will fall back to orphan. If you are creating a number of short lived pods you may be pushing the meta out of the LRU cache.
Source: https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/issues/234#issuecomment-634931613
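If short-lived pods are pushing metadata out of the LRU cache, one option is to enlarge the cache. A minimal sketch of the relevant filter section, assuming the plugin's documented `cache_size` and `cache_ttl` options (values here are illustrative, not recommendations):

```
# fluentd filter section for the kubernetes_metadata_filter plugin
<filter kubernetes.**>
  @type kubernetes_metadata
  # Increase the LRU cache so metadata for many short-lived pods
  # is retained longer before eviction (default is 1000 entries).
  cache_size 10000
  # How long (seconds) cached metadata stays valid before re-fetch.
  cache_ttl 3600
</filter>
```

Tuning these only mitigates eviction-related orphans; logs left on disk from a deleted-and-recreated namespace can still fall back to orphan, as the quoted comment explains.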