paynejacob closed this issue 4 years ago
@paynejacob You need to provide the text as the feature owner. When will we get this text?
I will add the text to this ticket by EOB 9/2.
How CRDs Work
The logging configuration is broken into three parts: Logging, Flow, and Output. The Logging CRD configures the agent running on the nodes and controls which files are consumed. The logs collected by the agent on the nodes are sent to a fluentd pod. This fluentd pod can be configured with a Flow. Flows control which logs are kept and which logs are ignored. Once a log meets the requirements of a Flow, it is sent to an Output. The Output CRD configures where logs are stored. Outputs can be used by multiple Flows. When a Flow or Output is created, it applies only to logs in the namespace where it was created. To aggregate logs across multiple namespaces, ClusterFlows and ClusterOutputs can be used.
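The relationship described above can be sketched with a minimal pair of CRDs. This is an illustrative example only: the names, namespace, and Elasticsearch host below are placeholder assumptions, not values from a real cluster.

```yaml
# Hypothetical sketch: an Output and a Flow in the same namespace.
# All names and the host below are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: example-output
  namespace: example-app
spec:
  elasticsearch:
    host: elasticsearch.example.com
    port: 9200
    scheme: http
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: example-flow
  namespace: example-app
spec:
  match:                       # keep only logs from pods labeled app=example
    - select:
        labels:
          app: example
  localOutputRefs:             # route matching logs to the Output above
    - example-output
```

Because both resources are namespaced, this Flow only sees logs from pods in `example-app`; a ClusterFlow/ClusterOutput pair would be needed to collect across namespaces.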
How great the logging chart is
The Rancher logging chart allows users to manage all aspects of logging. When installing the chart, users can select which cluster provider they are using and get detailed logs. From there, logging can be configured with a few simple CRDs.
With the recent release of https://github.com/banzaicloud/logging-operator/releases/tag/chart%2Flogging-operator-logging%2F3.6.0 we should add a cluster output column to the flow table.
@paynejacob Can we add links to external documentation on specific CRDs so users can click on them to read about them?
I updated my comment with links
There should be a subtext under the overview heading that says "Powered by Banzai Cloud"
The flow and cluster flow tables should be updated with the following changes:
Flow should be called "Namespace Level Logging" and have the following columns, in order: "Namespace", "Flow", "Outputs", and "Output Destinations".
Cluster flows should be called "Cluster Level Logging" and have the following columns, in order: "Cluster Flow", "Cluster Outputs", and "Output Destinations".
The values of the columns should be links and direct to the detail page of the given resource.
Was this text left out from the Logging Overview page? https://github.com/rancher/dashboard/issues/1089#issuecomment-686061537 @codyrancher @paynejacob
@izaac yes. This is the new mock https://github.com/rancher/dashboard/issues/1089#issuecomment-689042874
@paynejacob Do you have YAML that you can provide for the CRDs to populate what the tables would look like? Is this something you can test?
I can provide some example yaml to exercise this table. I will also test it.
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: dev-team-a
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: jpayne.do.rancher.space
    port: 7200
    scheme: http
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: archive
  namespace: cattle-logging-system
spec:
  logz:
    endpoint:
      url: "http://jpayne.do.rancher.space"
    gzip: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: security
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: jpayne.do.rancher.space
    port: 7200
    scheme: http
  logz:
    endpoint:
      url: "http://jpayne.do.rancher.space"
    gzip: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: app1
  namespace: cattle-logging-system
spec:
  outputRefs:
    - dev-team-a
    - archive
    - security
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: security
  namespace: cattle-logging-system
spec:
  outputRefs:
    - security
```
I am getting a javascript error when adding namespace level logging:
```
vue.runtime.esm.js:1888 TypeError: Cannot read property 'includes' of undefined
    at index.vue:63
    at Array.filter (<anonymous>)
    at index.vue:62
    at Array.map (<anonymous>)
    at f.mapFlows (index.vue:60)
    at f.clusterLevelLogging (index.vue:50)
    at _n.get (vue.runtime.esm.js:4479)
    at _n.evaluate (vue.runtime.esm.js:4584)
    at f.clusterLevelLogging (vue.runtime.esm.js:4836)
    at f.n (index.vue?d1ef:1)
```
cluster-all.yaml
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: dev-team-a
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: jpayne.do.rancher.space
    port: 7200
    scheme: http
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: archive
  namespace: cattle-logging-system
spec:
  logz:
    endpoint:
      url: "http://jpayne.do.rancher.space"
    gzip: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: security
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: jpayne.do.rancher.space
    port: 7200
    scheme: http
  logz:
    endpoint:
      url: "http://jpayne.do.rancher.space"
    gzip: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: app1
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - dev-team-a
    - archive
    - security
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: security
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - security
```
namespace-all.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dummyapp-dev
spec:
  finalizers:
    - kubernetes
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: nsoutput1
  namespace: dummyapp-dev
spec:
  elasticsearch:
    host: jpayne.do.rancher.space
    port: 7200
    scheme: http
    ssl_verify: false
    ssl_version: TLSv1_2
    index_name: overviewlogtest1
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: nsoutput2
  namespace: dummyapp-dev
spec:
  elasticsearch:
    host: jpayne.do.rancher.space
    port: 9200
    scheme: http
    ssl_verify: false
    ssl_version: TLSv1_2
    index_name: overviewlogtest2
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: example-flow
  namespace: dummyapp-dev
spec:
  globalOutputRefs:
    - dev-team-a
  localOutputRefs:
    - nsoutput1
    - nsoutput2
```
Rancher version: master-head (09/15/2020)
HA k8s 01c13f69cb277e037f6d58323cb0859300e8af5d
Downstream cluster is Single node DO RKE. Created by a Standard User.
Can you attempt to run this again? I can't reproduce it, and I believe this may have been resolved when I migrated flows to use localOutputRefs.
@codyrancher sure I will re-test it.
I see the links properly displayed but they are invalid.
@codyrancher I am getting these links that end with [Object object]
https://rancherhost/dashboard/[object%20Object]
```html
<td data-title="Cluster Output" align="left" class="col-list-link"><span row="[object Object]" col="[object Object]"><span><a href="[object Object]" rel="nofollow noopener noreferrer" target="_blank"><!---->
archive
<!----></a><span>, </span></span><span><a href="[object Object]" rel="nofollow noopener noreferrer" target="_blank"><!---->
dev-team-a
<!----></a><span>, </span></span><span><a href="[object Object]" rel="nofollow noopener noreferrer" target="_blank"><!---->
security
<!----></a><!----></span></span></td>
```
Chrome, with all caches cleared. I also tried Firefox with the same results.
Rancher version: master-head (09/22/2020)
67e24f2 HA k8s and EKS Downstream cluster
Closing as the current open issues are filed separately
External Links: The logging dashboard will have no external links
CRD Explanations: https://banzaicloud.com/docs/one-eye/logging-operator/crds/v1beta1/
We will want a paragraph explaining how loggers, flows and outputs work together. We also want to discuss the difference between cluster and regular flow / outputs. @paynejacob will write this paragraph.
CRD Summary:
One table will contain a list of flows with a column of associated outputs listed. A second table will contain a list of cluster flows with a column of associated cluster outputs listed.