fabric8io / gofabric8

CLI used when working with fabric8 running on Kubernetes or OpenShift
https://fabric8.io/
Apache License 2.0

Issue: Management : fluentd, ElasticSearch #293

Closed · antifragileer closed this issue 8 years ago

antifragileer commented 8 years ago

This is in reference to issue: https://github.com/fabric8io/gofabric8/issues/281

I manually fixed up the volumes and volume mounts in the fluentd DaemonSet, as the default fluentd config is not set up to work on Google (GKE).
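
For reference, the change is roughly of this shape (a minimal sketch with illustrative volume names, not the exact fabric8 manifest). On GKE, the files under /var/log/containers are symlinks into /var/lib/docker/containers, so the fluentd pod needs both host paths mounted:

spec:
  template:
    spec:
      containers:
      - name: fluentd
        # must cover both the symlinks and their targets
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers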

I have Management installed in one of the environments, "dev-testing". All pods are up and running.

The fluentd pods started working. However, on examining the pod logs, I see the following errors:

kubectl -n dev-testing logs fluentd-8kk5k
2016-11-21 21:18:43 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2016-11-21 21:18:43 +0000 [info]: starting fluentd-0.14.8
2016-11-21 21:18:43 +0000 [info]: spawn command to main: /opt/rh/rh-ruby23/root/usr/bin/ruby -Eascii-8bit:ascii-8bit /usr/bin/fluentd --under-supervisor
2016-11-21 21:18:47 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2016-11-21 21:18:47 +0000 [info]: starting fluentd-0.14.8 without supervision
2016-11-21 21:18:47 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.7.0'
2016-11-21 21:18:47 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.26.2'
2016-11-21 21:18:47 +0000 [info]: gem 'fluent-plugin-prometheus' version '0.2.1'
2016-11-21 21:18:47 +0000 [info]: gem 'fluentd' version '0.14.8'
2016-11-21 21:18:47 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2016-11-21 21:18:51 +0000 [info]: adding filter pattern="**" type="prometheus"
2016-11-21 21:18:51 +0000 [info]: adding match pattern="**" type="elasticsearch"
2016-11-21 21:18:53 +0000 [info]: adding source type="prometheus"
2016-11-21 21:18:53 +0000 [info]: adding source type="prometheus_monitor"
2016-11-21 21:18:53 +0000 [info]: adding source type="tail"
2016-11-21 21:18:53 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type prometheus
  </source>
  <source>
    @type prometheus_monitor
  </source>
  <source>
    @type tail
    path "/var/log/containers/*.log"
    pos_file "/var/log/es-containers.log.pos"
    time_format %Y-%m-%dT%H:%M:%S.%N
    tag "kubernetes.*"
    format json
    read_from_head true
    keep_time_key true
    <parse>
      @type json
      time_format %Y-%m-%dT%H:%M:%S.%N
    </parse>
  </source>
  <filter kubernetes.**>
    @type kubernetes_metadata
    kubernetes_url "https://kubernetes.default.svc"
    verify_ssl true
    preserve_json_log true
  </filter>
  <filter **>
    @type prometheus
    <metric>
      name fluentd_records_total
      type counter
      desc The total number of records read by fluentd.
    </metric>
  </filter>
  <match **>
    @type elasticsearch
    @log_level "info"
    include_tag_key true
    time_key "time"
    host "elasticsearch"
    port 9200
    scheme "http"
    buffer_type "memory"
    buffer_chunk_limit 8m
    buffer_queue_limit 8192
    flush_interval 10s
    retry_limit 10
    disable_retry_limit 
    retry_wait 1s
    max_retry_wait 60s
    num_threads 4
    logstash_format true
    reload_connections false
    <buffer>
      flush_mode interval
      retry_type exponential_backoff
      @type memory
      flush_thread_count 4
      flush_interval 10s
      retry_forever 
      retry_max_times 10
      retry_max_interval 60s
      chunk_limit_size 8m
      queue_length_limit 8192
    </buffer>
    <parse>
      time_key time
    </parse>
    <inject>
      time_key time
      tag_key tag
    </inject>
  </match>
</ROOT>
2016-11-21 21:18:53 +0000 [warn]: parameter 'keep_time_key' in <source>
  @type tail
  path "/var/log/containers/*.log"
  pos_file "/var/log/es-containers.log.pos"
  time_format %Y-%m-%dT%H:%M:%S.%N
  tag "kubernetes.*"
  format json
  read_from_head true
  keep_time_key true
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%N
  </parse>
</source> is not used.
2016-11-21 21:18:53 +0000 [warn]: section <parse> is not used in <match **> of elasticsearch plugin
2016-11-21 21:18:53 +0000 [info]: following tail of /var/log/containers/kube-dns-v20-xtn5f_kube-system_POD-b701bdaf505e1057c088519eb2aa14212e7c3b8eb4ae657e0097d1708d0db907.log
2016-11-21 21:18:53 +0000 [info]: following tail of /var/log/containers/heapster-v1.2.0-4260653533-c4cfz_kube-system_heapster-92fcfa3e2be4164aa6ab48218c090d1b72d5b53afabdaa42ad2dcb099a0c5f33.log
2016-11-21 21:18:53 +0000 [info]: following tail of /var/log/containers/prometheus-blackbox-expo-1820759746-4xf1t_dev-testing_blackbox-exporter-03a20711fe5ea82b26f1228718912562b8a7e26ee308253acbf67a62578e4012.log
2016-11-21 21:18:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:18:53 +0000 [info]: following tail of /var/log/containers/heapster-v1.2.0-4260653533-c4cfz_kube-system_heapster-nanny-60434feed5ff8210a639ed5b387118672b2f6f583296f117f3f3631e695ff90b.log
2016-11-21 21:18:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:21 +0000 [info]: following tail of /var/log/containers/message-gateway-474760680-yct21_dev-testing_message-gateway-8faf09a963bbd02c8eaf9ed64a27fd212288acb92377497dbe131faa3bf56d5c.log
2016-11-21 21:19:22 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/nexus-2180198382-izpj1_app_nexus-c6c7e7730b604655ae1461dd2416734eb1fb96449ec15c6ae1b3ab13d244645f.log
2016-11-21 21:19:22 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/prometheus-999244325-tegq9_dev-testing_prometheus-ef2f4fe60098ab595ec53b86878212679d1ec4e70584b94153a702dbaa4ed971.log
2016-11-21 21:19:22 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/kube-dns-v20-xtn5f_kube-system_healthz-f6f9820c5b2a7a45b76f358aaa6f181041cdd97239776949fd7c7f26e542232e.log
2016-11-21 21:19:22 +0000 [warn]: /var/log/containers/fluentd-wknvj_dev-testing_POD-727df1060b96d96b2851c5946e11e91cda9128c813b9810cad81dc2cba14740a.log not found. Continuing without tailing it.
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/fabric8-1449330595-t14x3_app_fabric8-fed0b75e6f8b4265370722222dada44680b9bdb64883d320914d6f9235ace14b.log
2016-11-21 21:19:22 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/nexus-2180198382-izpj1_app_POD-91b020c69fa9ff71e501c40a0328d370a75d1f3f7fcedc02672932ec89b4f24b.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/twitter-poller-1488348699-nxwxa_dev-testing_POD-dec8f95c1fe720094d3d80787c5dfe71f85b5abf40f21d6c03f34dfeeab73c41.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/node-exporter-0gyqr_dev-testing_node-exporter-9eb659152cce8b13eeb850ff214fac644a60a81c5bab276353acfee7ed61775d.log
2016-11-21 21:19:22 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/ms-app-3350154117-p1i90_app-production_POD-8ee901c62355021993eb67ff8a6c60084923c3227b4ad60b9630dd0b3d65d020.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/prometheus-999244325-tegq9_dev-testing_init-fb3d1d1c9b31907c81576c7af56a5a1f2447a08c65566b38c605d2230cb21977.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/fluentd-cloud-logging-gke-forge-paas-default-pool-c935ca99-088i_kube-system_fluentd-cloud-logging-5aa9cfea00d4dd1f5fe30ba63ec20915bb8389cc249bc86ba2ba229f08ee5e59.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/zookeeper-3695684073-8gdrp_app-testing_POD-596db3e503fa7079955c15b59fbd6b0e240a81d65c94a71df5355c112b8ebf1c.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/prometheus-blackbox-expo-1820759746-4xf1t_dev-testing_POD-e0b398af3d816a0e22b87319af9f854a5fda2578d8d6ec6ead949e209e0ed21e.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/kubernetes-dashboard-v1.4.0-ycqup_kube-system_kubernetes-dashboard-edfceccab49a604be7c3dcdf91d09c6a5db3528266735801483349d5d6874e75.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/fluentd-8kk5k_dev-testing_POD-21b42f0c60823f352bfd06701745a21116d264ceb718736b3f2f8326b2095a48.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/kube-proxy-gke-forge-paas-default-pool-c935ca99-088i_kube-system_kube-proxy-525137bc8b8b09750dedbffa7ea4847adb7ac4da4192f359214ca109dbe8ad15.log
2016-11-21 21:19:22 +0000 [info]: following tail of /var/log/containers/zookeeper-3695684073-8gdrp_app-testing_zookeeper-2b858278540a66688256b730eec51a71c8ee66109875f7532320b9b5f65cf858.log
2016-11-21 21:19:22 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:23 +0000 [info]: following tail of /var/log/containers/ingress-nginx-1160637730-u1iae_fabric8-system_nginx-ingress-48b3a51822120941de82c7144e786802c1f6841b33d79d0e4945e3ee16b4145d.log
2016-11-21 21:19:23 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/twitter-poller-1488348699-nxwxa_dev-testing_spring-boot-2cce3908c112736c2a3b6a722db4872292588acd52eb404e3656765e6b17eb2d.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/ms-dev-404638559-1bf51_dev-testing_POD-f28c6de19716e52e31aeb5ee72304d4c5904ff7e8d940e44129bfdf5d77cba21.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/message-gateway-474760680-kh170_app-staging_POD-8ac812f014d8d8052761798588cab626c953bb44ae908b4690af6d1374530a7c.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/twitter-pesistence-3638196059-z3zk4_dev-testing_spring-boot-2e756774f4654e1495fc3589c9250f4fac6b660eed65355f0ac3f8a2c7fede73.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/message-broker-1045034239-k0wim_dev-testing_message-broker-2f3c4fb4d3d7c61a58fc25ae968d7f41e661054ac4b42672a52c547ccb52ae0b.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/kube-dns-v20-xtn5f_kube-system_dnsmasq-c6ae07bba261ec6bf0b15d168a6ab559eedc61b7291a283ffe7c4e91dd9caf3f.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/fabric8-1449330595-t14x3_app_jenkinshift-010782d0cbae2456737316db5831a263325d22a9e337fb06f0f5844cd4703715.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/fabric8-docker-registry-2499738801-byb2c_app_fabric8-docker-registry-7c6a910f16324e7159a3771e40b68d0d6fd279c99dbd6031a0a97c0a857c1032.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/ms-dev-404638559-1bf51_dev-testing_spring-boot-e1dfc8796739456da468bde28315756b25a4c57137cf7808db1c82b22a5d3527.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/kubernetes-dashboard-v1.4.0-ycqup_kube-system_POD-8665d9e05e68973f4f9f5bf40eaad82b9571b90f47bb32390655120db8028a5c.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/prometheus-999244325-tegq9_dev-testing_configmap-reload-29a27ab0ca9af76806cb55d9741e1916acb19795943a89224dea832f99748417.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/message-gateway-474760680-yct21_dev-testing_POD-cf9d05e1c28cac2b92bf948cbda81b7c2f3f8a6f7258ae27f2ef5d2312848a25.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/twitter-pesistence-3638196059-z3zk4_dev-testing_POD-04a694aa0a8cd81a76d32fe1ef7b6e5ec6b5995f7ca8fc50d0111ca09b14a105.log
2016-11-21 21:19:53 +0000 [info]: following tail of /var/log/containers/zookeeper-3695684073-6k0lv_app-production_zookeeper-6781c26f0086112bac6489df21461136b9150044bbfbc306a7f8f9cc5a5d3479.log
2016-11-21 21:19:53 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/ms-app-3350154117-p1i90_app-production_spring-boot-512da14bd9525b0b561c956b3071f6674e6e8387915483264dd32b68eecbad3a.log
2016-11-21 21:19:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/message-broker-1045034239-3rl0r_app-staging_POD-d76d3993115df255667c3fc16525244807c1ae90ad75c73350dce6b7195e962d.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/message-gateway-474760680-kh170_app-staging_message-gateway-19eddf29d39ca9224a5295828b360250346b2932855c4d3aabe624b1cf170d1a.log
2016-11-21 21:19:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/fluentd-8kk5k_dev-testing_fluentd-fe198dca7db9e6562df98e67b5b1ba797ba903c1eaf4e1295b1f3dafaba0c273.log
2016-11-21 21:19:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/fluentd-cloud-logging-gke-forge-paas-default-pool-c935ca99-088i_kube-system_POD-9db243f61a90d6031917f9fcfe9abb43e8e907a9c70183110e67dcdbdd259f2e.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/node-exporter-0gyqr_dev-testing_POD-101b201c1bcf0234df6a503fdfcc0b6ff3daa029fce6640de03058ace8148f02.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/heapster-v1.2.0-4260653533-c4cfz_kube-system_POD-71469a62061400381a33164957ef57432be437cbe43a6d46ddf0bec700239e11.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/ms-app-3350154117-4s8as_app-testing_POD-3e299c7d2f45d077cc3cde247e3fe46fe00a41ffad7d862ebd8f28d29230546a.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/prometheus-999244325-tegq9_dev-testing_POD-fa46753da565ab6af7ab888f3753c7666d4d424d0335412abc19a5d98e2d9f31.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/ingress-nginx-1160637730-u1iae_fabric8-system_POD-08af50918fea6931d0497e6a5edd5778069f74ea1fb7c5390636749ce37ddc33.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/nexus-2180198382-izpj1_app_init-201a2825123bdc09d62c6ff7f3d2000cc2fd4e1cd4386e7a95579d664dd06753.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/zookeeper-3695684073-6k0lv_app-production_POD-b4cbeca2134b1c084bc050225d278d4702ac12f021d91a1754768cf9946d8b68.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/message-broker-1045034239-k0wim_dev-testing_POD-c15281ec6251cc5fc47dea2352eb92453398b9ec39bff6314327935ec54fd67e.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/fabric8-docker-registry-2499738801-byb2c_app_POD-08abc4d5832c1b52e3ea80b06735874d74de1fd771f4971e46b8a10152fc1f59.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/message-broker-1045034239-3rl0r_app-staging_init-a8ddd91d36a5364bd178fd7e10ae95421b3d4eb1c4123532fcd3d3a4ca83a848.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/fabric8-1449330595-t14x3_app_POD-fc7b4f4145dbb3116061deb286ee130e83b2dd08cf7d2af80ee3e653b6ca01aa.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/ms-app-3350154117-4s8as_app-testing_spring-boot-1b3cb60199c7f2178ddc125bb37a6914dce69eca1a3dc23bb7ba86568dbf7038.log
2016-11-21 21:19:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/kube-proxy-gke-forge-paas-default-pool-c935ca99-088i_kube-system_POD-45b39eb316b82366263d26de14c1444c81251578463d11263293034a26f2d864.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/message-broker-1045034239-k0wim_dev-testing_init-7613fbd2020fe4648ca5dfed2f9788360b68cb5bdd5cf40acc2b2c1f31b5e401.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/kube-dns-v20-xtn5f_kube-system_kubedns-ccd07a39de129513deb5eafa10033116199250f8266e8c6a46af0e308ba7d057.log
2016-11-21 21:19:54 +0000 [info]: following tail of /var/log/containers/message-broker-1045034239-3rl0r_app-staging_message-broker-2d99c7a35312916d0d685622a56810503d65d6f51fa8b719b4a6e5cad44c5302.log
2016-11-21 21:19:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:19:54 +0000 [warn]: super was not called in #start: called it forcedly plugin=Fluent::PrometheusMonitorInput
2016-11-21 21:19:54 +0000 [warn]: super was not called in #start: called it forcedly plugin=Fluent::PrometheusInput
2016-11-21 21:20:06 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:20:06 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:20:07 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:20:08 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:20:59 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:20:59 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:20:59 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:21:00 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:21:15 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:21:16 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:21:25 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:21:26 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:21:44 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:21:44 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:21:54 +0000 [info]: following tail of /var/log/containers/twitter-poller-1488348699-nxwxa_dev-testing_spring-boot-58b3266bde1e25091b68b4a0a963d43d66b8f43a29728e54f7a764983fc4817e.log
2016-11-21 21:21:54 +0000 [info]: disable filter chain optimization because [Fluent::KubernetesMetadataFilter, Fluent::PrometheusFilter] uses `#filter_stream` method.
2016-11-21 21:21:57 +0000 [info]: detected rotation of /var/log/containers/twitter-poller-1488348699-nxwxa_dev-testing_spring-boot-2cce3908c112736c2a3b6a722db4872292588acd52eb404e3656765e6b17eb2d.log; waiting 5 seconds
2016-11-21 21:22:09 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:22:10 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:22:20 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:22:21 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:22:22 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:22:23 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:22:23 +0000 [warn]: [Fluent::ElasticsearchOutput] failed to flush the buffer. plugin_id="object:17b6960" retry_time=0 next_retry=2016-11-21 21:22:24 +0000 chunk="541d6346b0e20cffaae3f50a23901e6f" error_class=Fluent::ElasticsearchOutput::ConnectionFailure error="Could not push logs to Elasticsearch after 2 retries. read timeout reached"
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:343:in `rescue in send'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:333:in `send'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:318:in `write'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/compat/output.rb:129:in `write'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:995:in `try_flush'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:1188:in `flush_thread_run'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:393:in `block (2 levels) in start'
  2016-11-21 21:22:23 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin_helper/thread.rb:66:in `block in thread_create'
2016-11-21 21:22:53 +0000 [warn]: [Fluent::ElasticsearchOutput] failed to flush the buffer. plugin_id="object:17b6960" retry_time=1 next_retry=2016-11-21 21:22:54 +0000 chunk="541d636596d23857ef131fc4eef5378e" error_class=Fluent::ElasticsearchOutput::ConnectionFailure error="Could not push logs to Elasticsearch after 2 retries. read timeout reached"
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:343:in `rescue in send'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:333:in `send'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:318:in `write'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/compat/output.rb:129:in `write'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:995:in `try_flush'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:1188:in `flush_thread_run'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:393:in `block (2 levels) in start'
  2016-11-21 21:22:53 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin_helper/thread.rb:66:in `block in thread_create'
2016-11-21 21:22:58 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:23:02 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:23:04 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:23:06 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:23:06 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:23:08 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:23:34 +0000 [warn]: [Fluent::ElasticsearchOutput] Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-11-21 21:23:36 +0000 [info]: [Fluent::ElasticsearchOutput] Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"http"}
2016-11-21 21:23:39 +0000 [warn]: [Fluent::ElasticsearchOutput] failed to flush the buffer. plugin_id="object:17b6960" retry_time=2 next_retry=2016-11-21 21:23:41 +0000 chunk="541d637552def14fb86cf6cc86e7bce9" error_class=Fluent::ElasticsearchOutput::ConnectionFailure error="Could not push logs to Elasticsearch after 2 retries. read timeout reached"
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:343:in `rescue in send'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:333:in `send'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.7.0/lib/fluent/plugin/out_elasticsearch.rb:318:in `write'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/compat/output.rb:129:in `write'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:995:in `try_flush'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:1188:in `flush_thread_run'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin/output.rb:393:in `block (2 levels) in start'
  2016-11-21 21:23:39 +0000 [warn]: /opt/rh/rh-ruby23/root/usr/local/share/gems/gems/fluentd-0.14.8/lib/fluent/plugin_helper/thread.rb:66:in `block in thread_create'

Wondering if Elasticsearch was running OK, I checked its logs too. They were large, so I dumped them to a file (linked below). It looks like Elasticsearch is having a lot of issues as well. Maybe the "Management" runtimes need to be reworked to get them up and running on Google (GKE).

https://dl.dropboxusercontent.com/u/102191/tmp/elasticsearch.log.zip
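
A command along these lines captures a pod's logs to a file (the Elasticsearch pod name here is hypothetical):

# pod name is hypothetical; list pods with: kubectl -n dev-testing get pods
kubectl -n dev-testing logs elasticsearch-xxxxx > elasticsearch.log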

antifragileer commented 8 years ago

Most of the log is about Elasticsearch being out of disk space. It looks like it ships with only 1Gi of space. How do you change this default for packages installed through the runtime console?
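
In case it helps: assuming the Elasticsearch data volume is backed by a PersistentVolumeClaim (the claim name below is hypothetical; whatever the fabric8 package actually creates), a larger claim would look something like this. Since a claim's size generally cannot be edited in place, it would have to be created before install, or the default one deleted and recreated:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data   # hypothetical; check with: kubectl -n dev-testing get pvc
  namespace: dev-testing
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi          # default appears to be 1Gi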

antifragileer commented 8 years ago

I am going to close this. I think most of the errors were caused by running out of disk space. I will open a new issue if I find problems as I test further.