tgelpi-bot opened 3 years ago
This problem appears to be intermittent: occasionally a new environment comes up with log storage already configured, but most of the time I have to restart the jx-build-controller pod to enable long term storage.
Not sure if this issue has been acknowledged. Each time I build a new environment, I need to restart the jx-build-controller pod to set long term storage.
Hey @tgelpi-bot , is this still an issue you are seeing?
Yes this issue is still outstanding.
I just built a new GKE environment recently, and checking the jx-build-controller pod logs I see:
{"timestamp":"2022-03-19T15:33:01.73282139Z","message":"long term storage for logs is not configured in cluster requirements","severity":"INFO","context":{}}
After deleting the pod and letting it restart, the logs now show:
{"timestamp":"2022-03-19T19:36:45.33661076Z","message":"long term storage for logs is being used, bucket gs://logs-jx3ggg-490a8a050b54","severity":"INFO","context":{}}
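For reference, the two log messages above can be pulled straight from the controller. This is a sketch assuming the Jenkins X default namespace `jx` and a deployment named `jx-build-controller`; adjust if your install differs:

```shell
# Check whether the controller picked up long term storage on startup.
# Namespace "jx" and deployment name "jx-build-controller" are assumptions
# based on a default Jenkins X 3 install.
kubectl logs -n jx deploy/jx-build-controller | grep "long term storage"
```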
Summary
JX3/GKE/GSM environment does not store logs under GCP storage even when logging is enabled.
Steps to reproduce the behavior
Build an OOTB JX3/GKE/GSM using Terraform
Validate that logging is enabled with Octant under config map terraform-jx-requirements (default namespace)
Validate log entries under jx-requirements.yml
Validate that the jx-build-controller pod logs contain the entry: long term storage for logs is being used, bucket gs://logs-jXXXX
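The validation steps above can also be done from the command line instead of Octant. A minimal sketch, assuming the default namespace for the config map and the Jenkins X default namespace `jx` for the controller:

```shell
# Step 2: confirm storage/logging settings in the Terraform-managed config map.
kubectl get configmap terraform-jx-requirements -n default -o yaml

# Step 4: confirm whether the controller actually configured long term storage.
# Matches either the "is not configured" or "is being used" message.
kubectl logs -n jx deploy/jx-build-controller | grep -i "long term storage"
```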
Expected behavior
Log files should reside under GCP buckets
Actual behavior
Log files do not reside under GCP buckets
Workaround steps to resolve
1. Delete the current jx-build-controller pod (a replacement pod should start automatically)
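The workaround above can be scripted. A hedged sketch: the namespace `jx` is the Jenkins X default, and the `app=jx-build-controller` label selector is an assumption, so verify it first:

```shell
# Confirm the pod's labels before relying on the selector below.
kubectl get pods -n jx --show-labels | grep jx-build-controller

# Delete the pod; the owning deployment recreates it, and on restart it
# picks up the long term storage configuration.
kubectl delete pod -n jx -l app=jx-build-controller
```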
Jx version
The output of
jx version
is:

Diagnostic information
The output of
jx diagnose version
is:

Kubernetes cluster
Kubectl version
The output of
kubectl version --client
is:

Operating system / Environment