pbrissaud opened this issue 2 years ago
Any update?
This config works for me
```yaml
loki:
  storage:
    bucketNames:
      chunks: example-bucket-loki-chunks
    type: gcs
    gcs:
      chunkBufferSize: 0
      requestTimeout: "10s"
      enableHttp2: true
write:
  extraEnv:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
  extraVolumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
  extraVolumes:
    - name: google-cloud-key
      secret:
        secretName: loki-gcs-secret
read:
  extraEnv:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
  extraVolumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
  extraVolumes:
    - name: google-cloud-key
      secret:
        secretName: loki-gcs-secret
```
EDIT: removed the "ruler" and "admin" buckets since they are not needed in the default configuration.
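For anyone copying this: the values above assume an existing Kubernetes Secret named loki-gcs-secret that holds the GCP service-account key. A minimal sketch of such a Secret (the JSON body is just a placeholder for your downloaded key file):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: loki-gcs-secret
type: Opaque
stringData:
  # placeholder: paste the service-account key JSON downloaded from GCP
  key.json: |
    {
      "type": "service_account",
      "project_id": "<project-id>"
    }
```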
@xtavras, thank you so much. I now see a loki_cluster_seed.json file in my GCS storage bucket, but that's all. Should there be more? Is the bucket updated immediately, or is data backed up and saved there at regular intervals?
After one day you should see more, but to be honest I only got it working yesterday, so it may be that some stuff is still missing. I'm still learning Loki too.
The config from @xtavras is in line with what I would recommend. It might help to include the contents of the Loki config ConfigMap?
This config works for me
```yaml
loki:
  storage:
    bucketNames:
      chunks: example-bucket-loki-chunks
      ruler: example-bucket-loki-ruler
      admin: example-bucket-loki-admin
    type: gcs
    gcs:
      chunkBufferSize: 0
      requestTimeout: "10s"
      enableHttp2: true
write:
  extraEnv:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
  extraVolumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
  extraVolumes:
    - name: google-cloud-key
      secret:
        secretName: loki-gcs-secret
read:
  extraEnv:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
  extraVolumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
  extraVolumes:
    - name: google-cloud-key
      secret:
        secretName: loki-gcs-secret
```
Looking at implementing Loki here and navigating the docs on how best to start, which hasn't been easy. Are those 3 buckets you have set up for the loki-simple-scalable deployment?
@AlHood77 you can ignore the "ruler" and "admin" buckets; only "chunks" is necessary. I took them from some docs/examples but never needed them. The ruler is better configured with local storage (see the sketch below).
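A minimal sketch of local ruler storage in the Loki configuration itself. The key names follow the Loki ruler docs; how you inject this depends on your chart version (e.g. via loki.structuredConfig), so treat it as an assumption rather than a drop-in snippet:

```yaml
ruler:
  storage:
    type: local
    local:
      # rule files are read from this directory inside the container,
      # typically mounted from a ConfigMap
      directory: /etc/loki/rules
  # scratch directory the ruler uses while evaluating rule groups
  rule_path: /tmp/loki/rules-temp
```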
@xtavras That's great, thanks! Do you think this could also be set up using workload identity instead of mounting keys? Just wondering if you have tried it that way?
Can't tell, never used it before. This works pretty well for us; we use sops for decrypting secrets to keep things simple and portable.
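For context, sops encrypts the Secret manifest at rest and decrypts it at deploy time. A hypothetical .sops.yaml creation rule for a setup like this, assuming a GCP KMS key (the resource path and regexes are placeholders):

```yaml
# .sops.yaml -- hypothetical example; replace the KMS resource path with your own
creation_rules:
  - path_regex: .*loki-gcs-secret.*\.yaml$
    gcp_kms: projects/<project-id>/locations/<region>/keyRings/<keyring>/cryptoKeys/<key>
    # encrypt only the secret payload; leave metadata readable for diffs
    encrypted_regex: ^(data|stringData)$
```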
Does loki-simple-scalable also support using Azure storage accounts? I can only use loki-distributed with Azure storage accounts.
Hi, can you share your configuration for loki-distributed with Azure storage accounts? Thank you!
It is not necessary to use a GCP service account key, and doing so is not recommended anyway. You can use Workload Identity instead.
You can follow this guide to configure Workload Identity: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
Once done, note the IAM service account name.
Then, if you add the Workload Identity annotation to the ServiceAccount, it should work.
Here is an example for loki-distributed:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki-loki-distributed
  labels:
    helm.sh/chart: loki-distributed-0.67.0
    app.kubernetes.io/name: loki-distributed
    app.kubernetes.io/instance: loki
  annotations:
    iam.gke.io/gcp-service-account: <IAM Service Account Name>@<ProjectId>.iam.gserviceaccount.com
automountServiceAccountToken: true
```
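If you deploy via Helm, the same annotation can usually be set through the chart's values instead of editing the rendered manifest, assuming the chart exposes the standard serviceAccount block (loki-distributed does):

```yaml
serviceAccount:
  create: true
  annotations:
    iam.gke.io/gcp-service-account: <IAM Service Account Name>@<ProjectId>.iam.gserviceaccount.com
```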
Hi,
I tried to deploy the loki-simple-scalable chart to store my logs in a GCS bucket. I use this values file:
It works (I can see the logs from a Grafana instance), but my GCS bucket is empty and the Persistent Volume is filling up.
I'm really new to Loki, so I think it's a configuration issue.