Closed · pawelrys closed this issue 3 years ago
Hi @pawelrys, if these logs are only available at ~/var/app/log in your application container, then I would recommend adding the script_exporter as a sidecar and sharing the volume with the log files between the two containers, as shown here: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/.

If these logs are also written to stdout / stderr, then you can also run the script_exporter as a privileged Pod and run your scripts on the host, because these logs should be available at /var/log/containers on the node where the application Pod is running. I think the DaemonSet could then look similar to the following one:
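For the sidecar approach, a minimal Pod sketch could look like the following. Note that the application image name, the `app-logs` volume name, and the `emptyDir` choice are assumptions for illustration; the mount path comes from your question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-script-exporter
spec:
  containers:
    - name: app
      image: my-app:latest          # placeholder for your application image
      volumeMounts:
        - name: app-logs
          mountPath: /var/app/log   # the application writes log_1.log / log_2.log here
    - name: script-exporter
      image: ricoberger/script_exporter
      volumeMounts:
        - name: app-logs
          mountPath: /var/app/log   # same volume, so the exporter's scripts can read the files
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}                  # shared between the two containers in this Pod
```

This only works if the application writes its logs to the shared volume rather than to a path baked into its own container filesystem.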
```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: script-exporter
spec:
  selector:
    matchLabels:
      app: script-exporter
  template:
    metadata:
      labels:
        app: script-exporter
    spec:
      containers:
        - image: ricoberger/script_exporter
          name: script-exporter
          volumeMounts:
            - mountPath: /var/log
              name: varlog
            - name: config
              mountPath: /etc/script_exporter
      volumes:
        - hostPath:
            path: /var/log
          name: varlog
        - name: config
          configMap:
            name: script-exporter
            defaultMode: 0777
```
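The `script-exporter` ConfigMap referenced above would hold the exporter configuration and the script it runs. A rough sketch follows; the script name, file names, and metric name here are assumptions, so please check the script_exporter documentation for the exact configuration format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: script-exporter
data:
  config.yaml: |
    scripts:
      - name: log_age
        script: /etc/script_exporter/log_age.sh
  log_age.sh: |
    #!/bin/sh
    # Hypothetical script: print one metric line in Prometheus text format.
    echo "log_age_difference_days 0"
```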
It contains just the basic fields, to give you an idea. You may also have to add some other fields from the example Deployment: https://github.com/ricoberger/script_exporter/blob/master/examples/kubernetes.yaml.

I will close this issue. If you still have problems running the exporter on Kubernetes, please let me know.
Hi, I have a problem understanding how your exporter should work on Kubernetes. It may be due to my limited knowledge of the Kubernetes environment, but I hope you can explain it to me. Using it locally isn't a problem, but on Kubernetes it is.

Suppose I have a cluster named my-cluster and, in it, a few sample pods that serve a hello world page. My job is to get data about specific files from the containers in which the programs are running. For example, at the path ~/var/app/log I have two files, log_1.log and log_2.log, in every container. I would like to calculate how many days lie between the creation/update of log_1.log and log_2.log, export that to Prometheus, and create a diagram in Grafana from this information for every container.
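The computation itself (days between the modification times of log_1.log and log_2.log, printed in Prometheus text format) could be sketched in shell like this; the function name, directory argument, and metric name are assumptions, and `stat -c %Y` assumes GNU coreutils as found in most Linux containers:

```shell
#!/bin/sh
# log_mtime_diff_days DIR:
#   print the whole number of days between the modification times of
#   DIR/log_1.log and DIR/log_2.log as a Prometheus metric line.
log_mtime_diff_days() {
  dir="$1"
  t1=$(stat -c %Y "$dir/log_1.log")   # mtime of log_1.log as a Unix timestamp
  t2=$(stat -c %Y "$dir/log_2.log")   # mtime of log_2.log as a Unix timestamp
  # Absolute difference in seconds, truncated to whole days.
  diff=$(( t1 > t2 ? t1 - t2 : t2 - t1 ))
  echo "log_age_difference_days $(( diff / 86400 ))"
}
```

script_exporter can run a script like this and serve whatever it prints to stdout as metrics for Prometheus to scrape.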
Should I install the script exporter in every container and expose the information about the difference between the files, or can I create the script exporter as another pod in my cluster and access the filesystem of every container to get the required data? If the second way is possible, could you explain what it should look like?

Thank you very much in advance for your time.

Paweł