pavelthq opened this issue 1 year ago
It seems this approach currently only collects Kubernetes "events", e.g. the message `Back-off restarting failed container`, without the actual pod log output that explains the error. Any thoughts on how log collection could be achieved?
```yaml
---
# Source: sentry-kubernetes/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
---
# Source: sentry-kubernetes/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
type: Opaque
data:
  sentry.dsn: "..."
---
# Source: sentry-kubernetes/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - get
  - list
  - watch
---
# Source: sentry-kubernetes/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sentry-kubernetes
subjects:
- kind: ServiceAccount
  name: sentry-kubernetes
  namespace: default
---
# Source: sentry-kubernetes/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
spec:
  replicas:
  selector:
    matchLabels:
      app: sentry-kubernetes
  template:
    metadata:
      annotations:
        checksum/secrets: ...
      labels:
        app: sentry-kubernetes
        release: sentry-kubernetes
    spec:
      containers:
      - name: sentry-kubernetes
        image: "getsentry/sentry-kubernetes:latest"
        imagePullPolicy: Always
        env:
        - name: DSN
          valueFrom:
            secretKeyRef:
              name: sentry-kubernetes
              key: sentry.dsn
        resources: {}
      serviceAccountName: sentry-kubernetes
```
Rendered with: `helm template sentry-kubernetes sentry/sentry-kubernetes --set sentry.dsn=https://... > exported-sentry-kubernetes.yaml`
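For context, the ClusterRole rendered above only grants `get`/`list`/`watch` on `events`, which lines up with the behavior described: the agent is never even permitted to read anything but events. Attaching pod logs would need support in sentry-kubernetes itself, and at minimum the ServiceAccount would also need read access to pods and the `pods/log` subresource. Below is a minimal sketch of such an extended rule set, assuming log collection were added; the second rule is hypothetical, not something the 0.3.2 chart templates:

```yaml
# Sketch: ClusterRole extended beyond what chart 0.3.2 renders.
# The first rule is the chart's existing events rule; the second
# (pods + pods/log) is an assumption about what reading container
# logs would additionally require.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sentry-kubernetes
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log   # subresource backing `kubectl logs`
  verbs:
  - get
  - list
```

Whether the service account would then be allowed to read logs can be checked with `kubectl auth can-i get pods --subresource=log --as=system:serviceaccount:default:sentry-kubernetes`.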