grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki
GNU Affero General Public License v3.0

Documentation on how to get started with Openshift #1165

Open cyriltovena opened 4 years ago

cyriltovena commented 4 years ago

Is your feature request related to a problem? Please describe.
It seems that running Promtail on OpenShift is not easy; see https://github.com/grafana/loki/issues/1153

We should have better documentation for running on Openshift.

Describe the solution you'd like
A step-by-step guide to install and configure Loki and Promtail on OpenShift.

cyriltovena commented 4 years ago

Another issue when using OpenShift: https://github.com/grafana/loki/issues/1166

cyriltovena commented 4 years ago

/cc @brancz if you know someone who can help us with this one. Ideally someone who has access to OpenShift.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

m-gnaedig commented 4 years ago

Hello everyone. Can someone provide a Promtail ConfigMap that works with OpenShift 3.9? Thanks in advance.

MastanaGuru commented 4 years ago

@ManCon, I am trying to install Loki + Fluent Bit on OpenShift 3.11, but the deployment fails because of [#1458]. I'd appreciate any info on how you got around it.

fchiorascu commented 4 years ago

@cyriltovena is there any status update on documentation for properly installing Loki on OpenShift?

cyriltovena commented 4 years ago

Unfortunately no news, and I don't have access to an OpenShift cluster. I think the biggest hurdles were the security permissions needed to read logs from the host, and a lower-than-usual ulimit.

Maybe Red Hat can provide a cluster to build that doc.

fchiorascu commented 4 years ago

I want to use/install it in my OpenShift project as a client/user, not as an administrator of the OpenShift cluster. :)

slim-bean commented 4 years ago

/cc @periklis :)

periklis commented 4 years ago

@slim-bean I have a WIP staged on my machine for the docs section. I will keep you posted hopefully by next week.

slim-bean commented 4 years ago

Awesome news!

owen-d commented 4 years ago

There's definitely open interest here; we just got asked about this during the configuration webinar :)

periklis commented 4 years ago

@owen-d Still on my todo list; I haven't gotten to it so far.

RiRa12621 commented 4 years ago

OpenShift 3.9, which is referenced in the issues, has been EOL since last year, and 3.11 will also be EOL by the middle of next year. It probably doesn't make much sense to document against those versions; it should rather be 4.x. I'll try to find out if we can provide a short-lived cluster to support the doc efforts.

fchiorascu commented 4 years ago

No worries, I managed to get things working three months ago. It would still be a nice feature for OpenShift Container Platform 4.x to have some insights and documentation covering the install/configuration steps.

RiRa12621 commented 4 years ago

That is great @fchiorascu :) However, my comment was directed more towards @cyriltovena, since I understood it as him attempting to write more general documentation.

dstockdreher commented 4 years ago

I can confirm that we are using OpenShift 3.11 and trying to use Loki at a very large company. Any additional documentation for OpenShift 3.11 would be very useful.

RiRa12621 commented 4 years ago

@cyriltovena can you just drop me an email, and then we will work out a 60-day (evaluation) subscription? You'll only have to run it on your own infra then.

dstockdreher commented 4 years ago

I was able to get it to work by adding the Promtail service account to the privileged SCC and then adding privileged: true to the securityContext section of the Promtail DaemonSet config. I also needed to update the client URLs from promtail-loki:3100 to just loki:3100.

cyriltovena commented 4 years ago

@RiRa12621 Sorry, I was off. I'm going to focus on something else for now, but as soon as I have some time for this one, I'll ping you.

oleksandrsemak commented 4 years ago

Hey @cyriltovena, OpenShift 4 required oc adm policy add-scc-to-user hostmount-anyuid and oc adm policy add-scc-to-user privileged. Also, OpenShift keeps logs in /var/log/containers, not in /var/log/pods.

Can you advise on the following error: msg="no path for target"?

OpenShift has paths like /var/log/containers/prometheus-federated-prometheus-0_thanos_oauth-proxy-1de7c3d81f64221a5c5da3a669b5f4cab96e1b2818c52858264892238dfc3d8a.log

where:

- prometheus-federated-prometheus-0 - pod_name
- thanos - namespace
- oauth-proxy - container_name
- 1de7c3d81f64221a5c5da3a669b5f4cab96e1b2818c52858264892238dfc3d8a - the container ID, which I don't find in the available meta labels

I am not sure how to adjust:

    - replacement: /var/log/pods/*$1/*.log
      separator: /
      source_labels:
      - __meta_kubernetes_pod_uid
      - __meta_kubernetes_pod_container_name
      target_label: __path__
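
For reference, the filename layout described above can be split mechanically; a throwaway Python sketch (the sample path is taken from the comment above, and rpartition assumes the trailing hex digest itself contains no dash):

```python
import os

path = ("/var/log/containers/"
        "prometheus-federated-prometheus-0_thanos_oauth-proxy-"
        "1de7c3d81f64221a5c5da3a669b5f4cab96e1b2818c52858264892238dfc3d8a.log")

# Layout: <pod_name>_<namespace>_<container_name>-<container_id>.log
stem = os.path.basename(path)[:-len(".log")]
pod_name, namespace, rest = stem.split("_", 2)
container_name, _, container_id = rest.rpartition("-")

print(pod_name)        # prometheus-federated-prometheus-0
print(namespace)       # thanos
print(container_name)  # oauth-proxy
```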

oleksandrsemak commented 4 years ago

So I changed the scrape configs: OpenShift doesn't put __meta_kubernetes_pod_uid in the log path; instead it uses the container ID, which I can't find in the available meta labels for kubernetes_sd_config. Also, log paths in OpenShift start with /var/log/containers:

    - replacement: /var/log/pods/*$1/*.log
      separator: /
      source_labels:
      - __meta_kubernetes_pod_uid
      - __meta_kubernetes_pod_container_name
      target_label: __path__

to

    - replacement: /var/log/**/*$1*.log
      separator: /
      source_labels:
      - __meta_kubernetes_pod_name
      target_label: __path__

Finally, it works fine on OpenShift 4.
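
The effect of the new replacement can be sanity-checked offline; a minimal Python sketch (the sample path comes from the earlier comment, and fnmatch only approximates Promtail's actual glob expansion):

```python
import fnmatch

# Sample log path reported on OpenShift 4 in this thread.
sample = ("/var/log/containers/"
          "prometheus-federated-prometheus-0_thanos_oauth-proxy-"
          "1de7c3d81f64221a5c5da3a669b5f4cab96e1b2818c52858264892238dfc3d8a.log")

pod_name = "prometheus-federated-prometheus-0"

# New replacement with $1 = __meta_kubernetes_pod_name substituted.
new_glob = "/var/log/**/*{}*.log".format(pod_name)

# The old replacement rooted the glob under /var/log/pods, which does not
# match this layout at all.
assert not sample.startswith("/var/log/pods/")
assert fnmatch.fnmatch(sample, new_glob)
```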

periklis commented 3 years ago

FWIW, I can confirm the steps regarding security context constraints on OpenShift 4. However, you can address them more simply by:

  1. Amending the ClusterRole to allow use of the security context constraints:

        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          name: promtail
        rules:
        - apiGroups:
          - ""
          resources:
          - nodes
          - nodes/proxy
          - services
          - endpoints
          - pods
          verbs:
          - get
          - list
          - watch
        - apiGroups:
          - security.openshift.io
          resourceNames:
          - hostmount-anyuid
          - privileged
          resources:
          - securitycontextconstraints
          verbs:
          - use

  2. Adding a securityContext to Promtail's container:

        spec:
          template:
            spec:
              containers:
              - image: grafana/promtail:1.6.1
                name: promtail
                securityContext:
                  privileged: true
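
A ClusterRoleBinding is still needed to attach the ClusterRole above to Promtail's ServiceAccount; a minimal sketch, assuming the ServiceAccount and ClusterRole are both named promtail and the ServiceAccount lives in a loki namespace (all three names are assumptions, not from this thread):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: promtail   # the ClusterRole granting SCC use above
subjects:
- kind: ServiceAccount
  name: promtail   # assumed ServiceAccount name
  namespace: loki  # assumed namespace
```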

ST-DDT commented 3 years ago

I successfully deployed the Loki stack on OKD 3.11 using these changes to the generated Helm chart (--debug --dry-run):

Multiple times (this hunk appears once per scrape config):

         source_labels:
         - __meta_kubernetes_pod_container_name
         target_label: container
-      - replacement: /var/log/pods/*$1/*.log
+      - replacement: /var/log/containers/*$1*.log
         separator: /
         source_labels:
-        - __meta_kubernetes_pod_uid
-        - __meta_kubernetes_pod_container_name
+        - __meta_kubernetes_pod_name
         target_label: __path__
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
     app: promtail
     chart: promtail-0.24.0
     release: loki
     heritage: Helm
   name: loki-promtail-clusterrole
 rules:
 - apiGroups: [""] # "" indicates the core API group
   resources:
   - nodes
   - nodes/proxy
   - services
   - endpoints
   - pods
   verbs: ["get", "watch", "list"]
+- apiGroups:
+  - security.openshift.io
+  resourceNames:
+  - hostmount-anyuid
+  - privileged
+  resources:
+  - securitycontextconstraints
+  verbs:
+  - use
   template:
     metadata:
       labels:
         app: promtail
         release: loki
       annotations:
         checksum/config: 896f97a2476f94ca48f58e2b9ed199f7640947714b67c343b56a87c82ed1894c
         prometheus.io/port: http-metrics
         prometheus.io/scrape: "true"
     spec:
       serviceAccountName: loki-promtail
       containers:
         - name: promtail
           image: "grafana/promtail:1.6.0"
           imagePullPolicy: IfNotPresent
           args:
             - "-config.file=/etc/promtail/promtail.yaml"
             - "-client.url=http://loki:3100/loki/api/v1/push"
           volumeMounts:
             - name: config
               mountPath: /etc/promtail
             - name: run
               mountPath: /run/promtail
             - mountPath: /var/lib/docker/containers
               name: docker
               readOnly: true
+            - mountPath: /var/log/containers
+              name: containers
+              readOnly: true
             - mountPath: /var/log/pods
               name: pods
               readOnly: true
           env:
             - name: HOSTNAME
               valueFrom:
                 fieldRef:
                   fieldPath: spec.nodeName
           ports:
             - containerPort: 3101
               name: http-metrics
           securityContext:
+            privileged: true
             readOnlyRootFilesystem: true
             runAsGroup: 0
             runAsUser: 0
           readinessProbe:
             failureThreshold: 5
             httpGet:
               path: /ready
               port: http-metrics
             initialDelaySeconds: 10
             periodSeconds: 10
             successThreshold: 1
             timeoutSeconds: 1
       tolerations:
         - effect: NoSchedule
           key: node-role.kubernetes.io/master
           operator: Exists
       volumes:
         - name: config
           configMap:
             name: loki-promtail
         - name: run
           hostPath:
             path: /run/promtail
         - hostPath:
             path: /var/lib/docker/containers
           name: docker
+        - hostPath:
+            path: /var/log/containers
+          name: containers
         - hostPath:
             path: /var/log/pods
           name: pods
   template:
     metadata:
       labels:
         app: loki
         name: loki
         release: loki
       annotations:
         checksum/config: d4d01b8aefcd522bbc8d4d0dd4a033c2a259493c77719d2b6facfdb63a94c120
         prometheus.io/port: http-metrics
         prometheus.io/scrape: "true"
     spec:
       serviceAccountName: loki
       securityContext:
         fsGroup: 1000160000
-        runAsGroup: 10001
+        runAsGroup: 1000160000
         runAsNonRoot: true
-        runAsUser: 10001
+        runAsUser: 1000160000
       initContainers:
         []
       containers:
         - name: loki
           image: "grafana/loki:1.6.0"

MastanaGuru commented 3 years ago

Thanks @ST-DDT, it worked like a charm. I made a minor change: instead of "/var/log/containers/*$1*.log" I used "/var/log/containers/$1*.log".

In loki-template, change the following to match your environment: fsGroup: 1000280000, runAsGroup: 1000280000, runAsUser: 1000280000.

Once deployed, run kubectl port-forward <loki-promtail-podname> 3101.

Then open http://localhost:3101/targets in a browser.
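
That pattern tweak can also be checked offline; a quick Python sketch (fnmatch only approximates the real glob semantics) showing both variants match, since the container log filename begins with the pod name:

```python
import fnmatch

# Sample log path from earlier in this thread.
sample = ("/var/log/containers/"
          "prometheus-federated-prometheus-0_thanos_oauth-proxy-"
          "1de7c3d81f64221a5c5da3a669b5f4cab96e1b2818c52858264892238dfc3d8a.log")
pod = "prometheus-federated-prometheus-0"

with_star = "/var/log/containers/*{}*.log".format(pod)
without_star = "/var/log/containers/{}*.log".format(pod)

# Filenames follow <pod>_<namespace>_<container>-<id>.log, so the
# leading wildcard is redundant.
assert fnmatch.fnmatch(sample, with_star)
assert fnmatch.fnmatch(sample, without_star)
```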

loki-cm.yaml

# Source: loki-stack/charts/loki/templates/secret.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: bcnc-loki
  labels:
    app: loki
data:
  loki.yaml: |
    auth_enabled: false
    chunk_store_config:
      max_look_back_period: 0s
    ingester:
      chunk_block_size: 262144
      chunk_idle_period: 3m
      chunk_retain_period: 1m
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
      max_transfer_retries: 0
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
      - from: "2020-10-15"
        index:
          period: 168h
          prefix: index_
        object_store: filesystem
        schema: v9
        store: boltdb
    server:
      http_listen_port: 3100
    storage_config:
      boltdb:
        directory: /data/loki/index
      filesystem:
        directory: /data/loki/chunks
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s

---
# Source: loki-stack/charts/promtail/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail
  namespace: bcnc-loki
  labels:
    app: promtail
data:
  promtail.yaml: |
    client:
      backoff_config:
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576
      batchwait: 1s
      external_labels: {}
      timeout: 10s
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: container
      - replacement: /var/log/containers/$1*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: container
      - replacement: /var/log/containers/$1*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: container
      - replacement: /var/log/containers/$1*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: container
      - replacement: /var/log/containers/$1*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: container
      - replacement: /var/log/containers/$1*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_name
        target_label: __path__
---
# Source: loki-stack/templates/datasources.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-loki-stack
  namespace: bcnc-loki
  labels:
    app: loki-stack
data:
  loki-stack-datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Loki
      type: loki
      access: proxy
      url: http://loki:3100
      version: 1

role-bindings.yaml

---
# Source: loki-stack/charts/loki/templates/podsecuritypolicy.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: loki
  labels:
    app: loki
spec:
  privileged: false
  allowPrivilegeEscalation: false
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'persistentVolumeClaim'
    - 'secret'
    - 'projected'
    - 'downwardAPI'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
    - ALL
---
# Source: loki-stack/charts/promtail/templates/podsecuritypolicy.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: loki-promtail
  labels:
    app: promtail
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  hostIPC: false
  hostNetwork: false
  hostPID: false
  privileged: false
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - secret
  - configMap
  - hostPath
  - projected
  - downwardAPI
  - emptyDir
---
# Source: loki-stack/charts/loki/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: loki
  annotations:
    {}
  name: loki
  namespace: bcnc-loki
---
# Source: loki-stack/charts/promtail/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: promtail
  name: loki-promtail
  namespace: bcnc-loki

---
# Source: loki-stack/charts/promtail/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: promtail
  name: loki-promtail-clusterrole
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "watch", "list"]
- apiGroups:
  - security.openshift.io
  resourceNames:
  - hostmount-anyuid
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use

---
# Source: loki-stack/charts/promtail/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: loki-promtail-clusterrolebinding
  labels:
    app: promtail
subjects:
  - kind: ServiceAccount
    name: loki-promtail
    namespace: bcnc-loki
roleRef:
  kind: ClusterRole
  name: loki-promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io

---
# Source: loki-stack/charts/loki/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki
  namespace: bcnc-loki
  labels:
    app: loki
rules:
- apiGroups:      ['extensions']
  resources:      ['podsecuritypolicies']
  verbs:          ['use']
  resourceNames:  [loki]

---
# Source: loki-stack/charts/promtail/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki-promtail
  namespace: bcnc-loki
  labels:
    app: promtail
rules:
- apiGroups:      ['extensions']
  resources:      ['podsecuritypolicies']
  verbs:          ['use']
  resourceNames:  [loki-promtail]

---
# Source: loki-stack/charts/loki/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki
  namespace: bcnc-loki
  labels:
    app: loki
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki

---
# Source: loki-stack/charts/promtail/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki-promtail
  namespace: bcnc-loki
  labels:
    app: promtail
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki-promtail
subjects:
- kind: ServiceAccount
  name: loki-promtail

loki-template.yaml

# Source: loki-stack/charts/loki/templates/service-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki-headless
  namespace: bcnc-loki
  labels:
    app: loki
spec:
  ports:
    - port: 3100
      protocol: TCP
      name: http-metrics
      targetPort: http-metrics
  selector:
    app: loki
---
# Source: loki-stack/charts/loki/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: bcnc-loki
  labels:
    app: loki
  annotations: {}
spec:
  ports:
    - port: 3100
      protocol: TCP
      name: http-metrics
      targetPort: http-metrics
  selector:
    app: loki
---

# Source: loki-stack/charts/loki/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: bcnc-loki
  labels:
    app: loki
  annotations: {}
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
  serviceName: loki-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
      annotations:
        prometheus.io/port: http-metrics
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: loki
      securityContext:
        runAsNonRoot: true
        # fsGroup: 10001
        # runAsGroup: 10001
        # runAsUser: 10001
        # fsGroup: 1000420000
        # runAsGroup: 1000420000
        # runAsUser: 1000420000
        fsGroup: 1000280000
        runAsGroup: 1000280000
        runAsUser: 1000280000
      initContainers: []
      containers:
        - name: loki
          image: "grafana/loki:1.6.0"
          imagePullPolicy: IfNotPresent
          args:
            - "-config.file=/etc/loki/loki.yaml"
          volumeMounts:
            - name: config
              mountPath: /etc/loki
            - name: storage
              mountPath: "/data"
              subPath: 
          ports:
            - name: http-metrics
              containerPort: 3100
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 45
          resources: {}
          securityContext:
            readOnlyRootFilesystem: true
          env:
      nodeSelector: {}
      affinity: {}
      tolerations: []
      terminationGracePeriodSeconds: 4800
      volumes:
        - name: config
          configMap:
            name: loki
        - name: storage
          emptyDir: {}

promtail-template.yaml

# Source: loki-stack/charts/promtail/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loki-promtail
  namespace: bcnc-loki
  labels:
    app: promtail
  annotations: {}
spec:
  selector:
    matchLabels:
      app: promtail
  updateStrategy: {}
  template:
    metadata:
      labels:
        app: promtail
      annotations:
        checksum/config: eff89b5b226a275507402f14333bbe6406a6051d3e6730b175565df44c675682
        prometheus.io/port: http-metrics
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: loki-promtail
      containers:
        - name: promtail
          image: "grafana/promtail:1.6.0"
          imagePullPolicy: IfNotPresent
          args:
            - "-config.file=/etc/promtail/promtail.yaml"
            - "-client.url=http://loki:3100/loki/api/v1/push"
            - "-log.level=debug"
          volumeMounts:
            - name: config
              mountPath: /etc/promtail
            - name: run
              mountPath: /run/promtail
            - mountPath: /var/lib/docker/containers
              name: docker
              readOnly: true
            - mountPath: /var/log/containers
              name: containers
              readOnly: true
            - mountPath: /var/log/pods
              name: pods
              readOnly: true
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - containerPort: 3101
              name: http-metrics
          securityContext:
            privileged: true
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsUser: 0
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
      nodeSelector: {}
      affinity: {}
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
      volumes:
        - name: config
          configMap:
            name: loki-promtail
        - name: run
          hostPath:
            path: /run/promtail
        - hostPath:
            path: /var/lib/docker/containers
          name: docker
        - hostPath:
            path: /var/log/containers
          name: containers
        - hostPath:
            path: /var/log/pods
          name: pods

MastanaGuru commented 3 years ago

Hello, any updates on the documentation for OCP 4.x ?

yasharne commented 3 years ago

Thanks @ST-DDT, this worked for me, but privileged: true is dangerous. Are there any equivalent capabilities?

GrafanaWriter commented 1 year ago

@JStickler - can you please review and assess for inclusion into the Loki docs?

JStickler commented 1 year ago

@GrafanaWriter, since OpenShift 3.x is out of support (https://access.redhat.com/support/policy/updates/openshift#dates) if we address this, it would be against OpenShift 4.x. I'll take a look once I've had a little bit more time to get familiar with things.

I do know that some of the Red Hat Openshift docs team was working on Loki docs. (Link for those following this issue....) https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-loki.html

abrennan89 commented 10 months ago

The documentation for installing Loki on OpenShift tends to be centered around the Loki Operator, so this is the path that I would recommend.

We have some docs at the moment for how to do this using the CLI as well as the web console (see https://docs.openshift.com/container-platform/4.14/logging/cluster-logging-loki.html#logging-loki-cli-install_cluster-logging-loki for the latest docs).

I am working on a plan to get Loki Operator docs into the https://grafana.com/docs/loki/latest/ docs. @JStickler I will reach out to you soon on this once I have confirmation on the desired structure from our team.

JStickler commented 10 months ago

Thanks for the update @abrennan89, looking forward to collaborating with you again!

abrennan89 commented 9 months ago

Latest docs on installing Loki on OCP: https://docs.openshift.com/container-platform/4.14/logging/log_storage/installing-log-storage.html