rancher / dashboard

The Rancher UI
https://rancher.com
Apache License 2.0

[Logging V2] Additional Outputs #1191

Closed paynejacob closed 3 years ago

paynejacob commented 3 years ago

Add the following outputs to the logging output CRD Page.

For all outputs, add TLS/CA fields in a form similar to the Elasticsearch output.

Outputs

For any reference to secret type see: https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/secret/
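For context, a `secret_pair` field in an Output spec typically references a Kubernetes Secret via `valueFrom`, per the linked secret documentation. A minimal sketch (the Secret name and key below are placeholders, not real values from this issue):

```yaml
# Hypothetical secret_pair field inside an Output spec.
# "my-output-secret" and "apiKey" are placeholder names.
api_key:
  valueFrom:
    secretKeyRef:
      name: my-output-secret  # Kubernetes Secret in the logging namespace
      key: apiKey             # key within that Secret's data
```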

AWS Elasticsearch

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/aws_elasticsearch/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Url | endpoint.url | string | - |
| Key Id | endpoint.aws_key_id | secret_pair | - |
| Secret Id | endpoint.aws_sec_id | secret_pair | - |

Azure Storage

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/azurestore/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Storage Account | azure_storage_account | secret_pair | - |
| Access Key | azure_storage_access_key | secret_pair | - |
| Container | azure_container | string | - |
| Path | path | string | - |
| Store As | store_as | string<gzip\|json\|text\|lzo\|lzma2> | gzip |
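As a sketch of what a user would end up with (assuming the logging-operator v1beta1 Output CRD and its `azurestorage` plugin key; names and values below are placeholders), an Azure Storage output might look like:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: azure-output           # placeholder name
spec:
  azurestorage:
    azure_storage_account:
      valueFrom:
        secretKeyRef:
          name: azure-secret   # placeholder Secret
          key: storageAccount
    azure_storage_access_key:
      valueFrom:
        secretKeyRef:
          name: azure-secret
          key: accessKey
    azure_container: logs
    path: cluster-logs/
    store_as: gzip             # one of gzip|json|text|lzo|lzma2
```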

Cloudwatch

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/cloudwatch/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Key Id | aws_key_id | secret_pair | - |
| Secret Id | aws_sec_id | secret_pair | - |
| Endpoint | endpoint | string | - |

Datadog

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/datadog/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| API Key | api_key | secret_pair | - |
| Use SSL | use_ssl | bool | - |
| Use Compression | use_compression | bool | - |
| Host | host | string | - |

File

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/file/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Path | path | string | - |

GCS

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/gcs/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Project | project | string | - |
| Credentials Json | credentials_json | secret_pair | - |
| Bucket | bucket | string | - |
| Path | path | string | - |
| Overwrite Existing Path | overwrite | bool | false |

Kinesis Stream

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/kinesis_firehose/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Key Id | aws_key_id | secret_pair | - |
| Secret Id | aws_sec_id | secret_pair | - |

LogDNA

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/logdna/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| API Key | api_key | string | - |
| Hostname | hostname | string | - |
| App | app | string | - |

LogZ

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/logz/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Endpoint | endpoint | string | - |
| Enable Compression | gzip | bool | true |

Loki

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/loki/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Url | url | string | - |
| Username | username | secret_pair | - |
| Password | password | secret_pair | - |
| Tenant | tenant | string | - |
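To illustrate how the secret_pair fields combine with the plain string fields, here is a sketch of a Loki output (assuming the logging-operator v1beta1 Output CRD; the URL, tenant, and Secret names are placeholders):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: loki-output               # placeholder name
spec:
  loki:
    url: https://loki.example.com # placeholder endpoint
    username:
      valueFrom:
        secretKeyRef:
          name: loki-credentials  # placeholder Secret
          key: username
    password:
      valueFrom:
        secretKeyRef:
          name: loki-credentials
          key: password
    tenant: example-tenant        # placeholder tenant ID
```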

New Relic

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/newrelic/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| API Key | api_key | secret_pair | - |
| License Key | license_key | secret_pair | - |
| Base URI | base_uri | string | - |

SumoLogic

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/sumologic/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Endpoint | endpoint | string | - |

Syslog

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/syslog/

| Name | Type | Required | Default |
| --- | --- | --- | --- |
| host | string | Yes | - |
| port | int | No | 514 |
| transport | string | No | "tls" |
| insecure | bool | No | false |
| trusted_ca_path | secret | No | - |
| format | Format | No | - |
| buffer | Buffer | No | - |
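A minimal syslog output sketch using only the fields above (assuming the logging-operator v1beta1 Output CRD once the upstream syslog support discussed below is available; host and names are placeholders):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: syslog-output           # placeholder name
spec:
  syslog:
    host: syslog.example.com    # required; placeholder host
    port: 514                   # default
    transport: tls              # default "tls"
    insecure: false             # default
```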

S3

https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/s3/

| Name | Key | Type | Default |
| --- | --- | --- | --- |
| Key Id | aws_key_id | secret_pair | - |
| Secret Id | aws_sec_id | secret_pair | - |
| Endpoint | s3_endpoint | string | - |
| Bucket | s3_bucket | string | - |
| Path | path | string | - |
| Overwrite Existing Path | overwrite | bool | false |
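An S3 output sketch using the keys from the table above (assuming the logging-operator v1beta1 Output CRD; bucket, endpoint, and Secret names are placeholders, and as noted later in this thread the AWS credential key names vary per provider, so verify them against the linked docs):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output                # placeholder name
spec:
  s3:
    # AWS credential field names differ between AWS outputs;
    # check the linked plugin documentation before relying on these.
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-credentials   # placeholder Secret
          key: awsKeyId
    aws_sec_id:
      valueFrom:
        secretKeyRef:
          name: s3-credentials
          key: awsSecretId
    s3_endpoint: https://s3.us-east-1.amazonaws.com  # placeholder
    s3_bucket: cluster-logs      # placeholder bucket
    path: logs/
    overwrite: false
```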
codyrancher commented 3 years ago

For whoever ends up testing this: each of the AWS providers had different keyId/secretId fields, so refer to the linked documentation. I also had to add a few fields that turned out to be required.

codyrancher commented 3 years ago

One extra note: I didn't add syslog; it doesn't appear to be a part of https://github.com/banzaicloud/logging-operator/blob/master/config/crd/bases/logging.banzaicloud.io_outputs.yaml.

If we still want to add it I can, but I'd like to know why it's not a part of the outputs CRD.

nickgerace commented 3 years ago

@codyrancher I'll ask our friends and find out. https://github.com/banzaicloud/logging-operator/issues/623#issuecomment-739071458

nickgerace commented 3 years ago

An upstream fix is being added, tracked in: https://github.com/rancher/rancher/issues/29892

nickgerace commented 3 years ago

@codyrancher: The syslog PRs have been merged.

izaac commented 3 years ago

@codyrancher we should add syslog; it's required for E2E testing: https://github.com/rancher/rancher/issues/29892

nickgerace commented 3 years ago

@izaac I'm the blocker here at the moment. We would like to reduce the Buffer field requirements since there are an overwhelming number of choices.

nickgerace commented 3 years ago

@codyrancher: I've narrowed down what is essential to include; see the leftmost column.

| Rancher Dashboard Inclusion | Variable Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- | --- |
| No | type | string | No | - | Fluentd core bundles memory and file plugins. 3rd party plugins are also available when installed. |
| Yes | tags | string | No | tag,time | When tag is specified as a buffer chunk key, the output plugin writes events into chunks separately per tag. |
| No | path | string | No | operator generated | The path where buffer chunks are stored. The '*' is replaced with random characters. It's highly recommended to leave this default. |
| Yes | chunk_limit_size | string | No | - | The max size of each chunk: events are written into a chunk until its size reaches this limit. |
| Yes | chunk_limit_records | int | No | - | The max number of events each chunk can store. |
| Yes | total_limit_size | string | No | - | The size limitation of this buffer plugin instance. Once the total size of the stored buffer reaches this threshold, all append operations fail with an error (and data is lost). |
| No | queue_limit_length | int | No | - | The queue length limitation of this buffer plugin instance. |
| No | chunk_full_threshold | string | No | - | The percentage of chunk size threshold for flushing. The output plugin flushes the chunk when the actual size reaches chunk_limit_size * chunk_full_threshold (== 8MB * 0.95 by default). |
| No | queued_chunks_limit_size | int | No | - | Limits the number of queued chunks. If you set a small flush_interval, e.g. 1s, there are lots of small queued chunks in the buffer. This is not good with a file buffer because it consumes lots of fd resources when the output destination has a problem. This parameter mitigates such situations. |
| No | compress | string | No | - | If set to gzip, Fluentd compresses data records before writing them to buffer chunks. |
| No | flush_at_shutdown | bool | No | - | Whether to flush/write all buffer chunks at shutdown. |
| No | flush_mode | string | No | - | default: equals lazy if time is specified as a chunk key, interval otherwise; lazy: flush/write chunks once per timekey; interval: flush/write chunks per specified time via flush_interval; immediate: flush/write chunks immediately after events are appended. |
| Yes | flush_interval | string | No | 60s | - |
| No | flush_thread_count | int | No | - | The number of output plugin threads used to write chunks in parallel. |
| No | flush_thread_interval | string | No | - | The sleep interval (seconds) for threads to wait for the next flush trial (when no chunks are waiting). |
| No | flush_thread_burst_interval | string | No | - | The sleep interval (seconds) for threads between flushes when the output plugin flushes waiting chunks back to back. |
| No | delayed_commit_timeout | string | No | - | The timeout (seconds) until the output plugin decides an async write operation has failed. |
| No | overflow_action | string | No | - | How the output plugin behaves when its buffer queue is full. throw_exception: raise an exception to show this error in the log; block: block input plugin processing of events into that buffer; drop_oldest_chunk: drop/purge the oldest chunk to accept the newly incoming chunk. |
| Maybe | retry_timeout | string | No | - | The maximum seconds to retry flushing while failing, until the plugin discards buffer chunks. |
| Maybe | retry_forever | *bool | No | true | If true, the plugin ignores the retry_timeout and retry_max_times options and retries flushing forever. |
| Maybe | retry_max_times | int | No | - | The maximum number of times to retry flushing while failing. |
| No | retry_secondary_threshold | string | No | - | The ratio of retry_timeout at which to switch to the secondary while failing (maximum valid value is 1.0). |
| No | retry_type | string | No | - | exponential_backoff: wait seconds grow exponentially per failure; periodic: the output plugin retries periodically with fixed intervals (configured via retry_wait). |
| No | retry_wait | string | No | - | Seconds to wait before the next retry to flush, or the constant factor of exponential backoff. |
| No | retry_exponential_backoff_base | string | No | - | The base number of exponential backoff for retries. |
| No | retry_max_interval | string | No | - | The maximum interval (seconds) for exponential backoff between retries while failing. |
| No | retry_randomize | bool | No | - | If true, the output plugin retries after a randomized interval, to avoid burst retries. |
| No | disable_chunk_backup | bool | No | - | Instead of storing unrecoverable chunks in the backup directory, just discard them. This option is new in Fluentd v1.2.6. |
| Yes, essential | timekey | string | Yes | 10m | The output plugin flushes chunks per specified time (enabled when time is specified in chunk keys). |
| Yes, essential | timekey_wait | string | No | 10m | The output plugin writes chunks timekey_wait seconds after timekey expiration. |
| Yes, essential | timekey_use_utc | bool | No | - | Whether the output plugin uses UTC to format placeholders using timekey. |
| Maybe | timekey_zone | string | No | - | The timezone (-0700 or Asia/Tokyo) string for formatting timekey placeholders. |
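Taking only the rows marked "Yes" or "Yes, essential" above, a minimal buffer block on an output spec might look like the following sketch (the values are illustrative, not recommendations):

```yaml
# Hypothetical buffer section for an Output spec, restricted to the
# fields proposed for the dashboard; all values are placeholders.
buffer:
  tags: time                 # buffer chunk keys (default tag,time)
  timekey: 10m               # flush chunks per this period (required)
  timekey_wait: 1m           # wait after timekey expiration (default 10m)
  timekey_use_utc: true      # format timekey placeholders in UTC
  flush_interval: 60s        # default 60s
  chunk_limit_size: 8MB      # max size of each chunk
  chunk_limit_records: 5000  # max events per chunk
  total_limit_size: 512MB    # total buffer size limit
```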
izaac commented 3 years ago

@nickgerace oh ok, I didn't know that. And yes, I remember I had to play with buffer values to get the logs flowing when we originally used the forwarder.

https://github.com/rancher/rancher/issues/28566#issuecomment-698667911

nickgerace commented 3 years ago

@izaac Not a problem :) We were moving back and forth rapidly on this since we had to get the 3.8.2 upstream logging chart in as well.

izaac commented 3 years ago

Rancher version v2.5-head (12/14/2020): e3f5264
Rancher version master-head (12/14/2020): e971e0735

Scope

Validated that the generated spec of each value was correct and correctly saved from the UI. This was done for Outputs.


Also I've done some UX checks and opened a couple of small issues separately for tracking.