
clf-to-azure doc points to soon-to-be-deprecated OpenShift Elasticsearch Operator? #512

Closed: justsomecorporateuser closed this issue 2 months ago

justsomecorporateuser commented 9 months ago

Hi, the document https://cloud.redhat.com/experts/aro/clf-to-azure/ says in step 5: "Deploy the OpenShift Elasticsearch Operator and the Red Hat OpenShift Logging Operator"

But the OpenShift Elasticsearch Operator is deprecated and should be replaced with the Loki Operator, shouldn't it? From elsewhere in the OpenShift documentation:

"The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator."

Actually, I would not mind skipping both the Elasticsearch and the Loki Operator. I would just like to get logs out of ARO with the log forwarder. But when reading other instructions, I end up having to create a bucket when installing the Loki Operator: https://docs.openshift.com/container-platform/4.13/logging/log_storage/installing-log-storage.html

Am I forced to create an Azure storage bucket (for LokiStack) to be able to get logs out of ARO with the log forwarder?

andyrepton commented 9 months ago

Hi there! This is more a question for the official docs, but I can help you out here if you like. Yes, what you're looking for is possible. Something like the following would be needed:

  1. Install the Cluster Logging Operator.
  2. Create a ClusterLogging CR like so, only specifying the collection field (so not specifying visualization or logStore):
```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: <name>           # conventionally "instance"
  namespace: <namespace> # conventionally "openshift-logging"
spec:
  managementState: "Managed"
  collection:
    type: "vector"
```

This will install the Vector DaemonSet (you should see the collector pods start to pop up in the openshift-logging namespace).
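To apply and verify that step, something like this should work (the file name is just an example):

```bash
# Apply the ClusterLogging CR above (file name is hypothetical) and
# watch for the collector pods to appear in openshift-logging
oc apply -f clusterlogging.yaml
oc get pods -n openshift-logging -w
```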

  3. Create a ClusterLogForwarder CR like so, specifying where you'd like vector to send the logs:
```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  pipelines:
    - inputRefs:
        - <log_type>
      outputRefs:
        - <output_name>
  outputs:
    - name: <output_name>
      type: <output_type>
      url: <log_output_url>
```
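For a concrete sketch, here's a hypothetical forwarder sending application logs to an external syslog receiver over TLS; the receiver hostname and port are made-up placeholders, and with the logging.openshift.io/v1 API the CR is conventionally named instance in openshift-logging:

```bash
# Hypothetical example: forward application logs to an external
# syslog endpoint (hostname and port are placeholders)
oc apply -f - <<EOF
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - inputRefs:
        - application
      outputRefs:
        - remote-syslog
  outputs:
    - name: remote-syslog
      type: syslog
      url: tls://rsyslog.example.com:6514
EOF
```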

This should trigger vector to immediately reconfigure and start sending logs to your endpoint (you should see the pods restart with the correct config). If they do not, please check that the endpoint is in the supported list here: https://github.com/openshift/cluster-logging-operator/blob/master/api/logging/v1/output_types.go#L8
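If nothing arrives, a couple of quick checks along these lines can help (note the component=collector label selector is my assumption and may differ between logging versions):

```bash
# Verify the collector pods restarted with the new config, then tail
# their logs for delivery errors
oc get pods -n openshift-logging
oc logs -n openshift-logging -l component=collector --tail=50
```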

Hope this helps. I'm working on some new how-tos for this site in my spare time, so I'll leave this issue open to remind me to crack on with it.

justsomecorporateuser commented 9 months ago

Thank you Andy! We will try that!

Best regards, Jan