getsentry / self-hosted

Sentry, feature-complete and packaged up for low-volume deployments and proofs-of-concept
https://develop.sentry.dev/self-hosted/

restarting on-premise sentry components randomly #899

Closed romankydybets closed 3 years ago

romankydybets commented 3 years ago

Version Information

Version: getsentry/sentry:nightly

Steps to Reproduce

I run Sentry following the guide at https://develop.sentry.dev/self-hosted/. After some time I noticed that some Kafka topics accumulated LAG, and some Docker containers started restarting randomly:

CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS              PORTS                          NAMES
348f287b772f        nginx:1.16                             "nginx -g 'daemon of…"   7 hours ago         Up 7 hours          0.0.0.0:80->80/tcp             sentry_onpremise_nginx_1
ff72a83607b2        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 16 minutes       9000/tcp                       sentry_onpremise_subscription-consumer-transactions_1
c3e624da34c9        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 7 hours          9000/tcp                       sentry_onpremise_worker_1
8d1f122d7953        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 16 minutes       9000/tcp                       sentry_onpremise_post-process-forwarder_1
11fb60759ffb        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 16 minutes       9000/tcp                       sentry_onpremise_subscription-consumer-events_1
9966dadc5907        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 7 hours          9000/tcp                       sentry_onpremise_cron_1
466bc4ab5cdf        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 16 minutes       9000/tcp                       sentry_onpremise_ingest-consumer_1
a06a322143c7        sentry-cleanup-onpremise-local         "/entrypoint.sh '0 0…"   7 hours ago         Up 7 hours          9000/tcp                       sentry_onpremise_sentry-cleanup_1
0bf233c7af56        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   7 hours ago         Up 7 hours          9000/tcp                       sentry_onpremise_web_1
ddb36898e95a        getsentry/snuba:nightly                "./docker_entrypoint…"   7 hours ago         Up 16 minutes       1218/tcp                       sentry_onpremise_snuba-replacer_1
4b69f84a3283        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 2 days           1218/tcp                       sentry_onpremise_snuba-sessions-consumer_1
2f17b5cb99b1        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 2 days           1218/tcp                       sentry_onpremise_snuba-outcomes-consumer_1
debe99adcb41        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 7 hours          1218/tcp                       sentry_onpremise_snuba-api_1
d53b515e9f24        snuba-cleanup-onpremise-local          "/entrypoint.sh '*/5…"   7 days ago          Up 7 hours          1218/tcp                       sentry_onpremise_snuba-cleanup_1
f5af53dc9b87        getsentry/relay:nightly                "/bin/bash /docker-e…"   7 days ago          Up 7 hours          3000/tcp                       sentry_onpremise_relay_1
6d0cafb4c31d        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 2 days           1218/tcp                       sentry_onpremise_snuba-transactions-consumer_1
e2605986f5ff        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 16 minutes       1218/tcp                       sentry_onpremise_snuba-subscription-consumer-transactions_1
04a9d116b8e8        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 7 hours          1218/tcp                       sentry_onpremise_snuba-consumer_1
adef04e06b7a        getsentry/snuba:nightly                "./docker_entrypoint…"   7 days ago          Up 16 minutes       1218/tcp                       sentry_onpremise_snuba-subscription-consumer-events_1
0a59ef43f46c        confluentinc/cp-kafka:5.5.0            "/etc/confluent/dock…"   7 days ago          Up 16 minutes       9092/tcp                       sentry_onpremise_kafka_1
b248e4be153e        memcached:1.5-alpine                   "docker-entrypoint.s…"   7 days ago          Up 7 hours          11211/tcp                      sentry_onpremise_memcached_1
8faa2b06fd5a        yandex/clickhouse-server:20.3.9.70     "/entrypoint.sh"         7 days ago          Up 7 hours          8123/tcp, 9000/tcp, 9009/tcp   sentry_onpremise_clickhouse_1
7af6697fd191        tianon/exim4                           "docker-entrypoint.s…"   7 days ago          Up 7 hours          25/tcp                         sentry_onpremise_smtp_1
92ee6ba29c77        redis:5.0-alpine                       "docker-entrypoint.s…"   7 days ago          Up 7 hours          6379/tcp                       sentry_onpremise_redis_1
6aa7ea250f3c        symbolicator-cleanup-onpremise-local   "/entrypoint.sh '55 …"   7 days ago          Up 7 hours          3021/tcp                       sentry_onpremise_symbolicator-cleanup_1
59ca62857165        confluentinc/cp-zookeeper:5.5.0        "/etc/confluent/dock…"   7 days ago          Up 7 hours          2181/tcp, 2888/tcp, 3888/tcp   sentry_onpremise_zookeeper_1
f062702881e9        getsentry/symbolicator:nightly         "/bin/bash /docker-e…"   7 days ago          Up 7 hours          3021/tcp                       sentry_onpremise_symbolicator_1
b8d0f0bca788        postgres:9.6                           "docker-entrypoint.s…"   7 days ago          Up 7 hours          5432/tcp                       sentry_onpremise_postgres_1
GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-consumers events          0          3919            3919            0               rdkafka-e34479b9-7c04-458c-acf8-d7fba74abe9f /172.19.0.19    rdkafka

GROUP                                                     TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
subscriptions-commit-log-aeddc6ae8d9c11eba1d50242ac13000a snuba-commit-log 0          -               1739            -               rdkafka-f106eefe-7152-4538-9b59-f0ad734c5e88 /172.19.0.10    rdkafka

GROUP           TOPIC               PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
ingest-consumer ingest-events       0          3918            3918            0               rdkafka-b737c37e-f321-41e3-9127-c6cca257a20e /172.19.0.23    rdkafka
ingest-consumer ingest-transactions 0          -               0               -               rdkafka-b737c37e-f321-41e3-9127-c6cca257a20e /172.19.0.23    rdkafka
ingest-consumer ingest-attachments  0          -               0               -               rdkafka-b737c37e-f321-41e3-9127-c6cca257a20e /172.19.0.23    rdkafka

GROUP                                      TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-transactions-subscriptions-consumers events          0          -               3919            -               rdkafka-a6004f3a-1ba4-412b-961c-ec3a6f569ce5 /172.19.0.10    rdkafka

GROUP                                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-events-subscriptions-consumers events          0          3918            3919            1               rdkafka-0c318586-46b4-49b1-a524-d7b0764509dd /172.19.0.16    rdkafka

GROUP                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-post-processor events          0          3911            3919            8               rdkafka-4b1e3f10-a17d-42ff-8c71-a849b619830d /172.19.0.26    rdkafka

GROUP                                                      TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-post-processor:sync:b26f17dc8d9c11ebb6490242ac13001a snuba-commit-log 0          -               1739            -               rdkafka-689ac82d-122a-49f6-8cf9-e5126670608d /172.19.0.26    rdkafka

GROUP           TOPIC              PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-replacers event-replacements 0          -               2               -               rdkafka-2b06c1a1-7bc7-4379-a619-87343a69ca15 /172.19.0.20    rdkafka

GROUP                                                     TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
subscriptions-commit-log-aee84dea8d9c11eb83f80242ac130010 snuba-commit-log 0          -               1739            -               rdkafka-6f6fc12a-7063-4ac9-bec4-ac25b073ffec /172.19.0.16    rdkafka

GROUP                       TOPIC                             PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
query-subscription-consumer transactions-subscription-results 0          -               0               -               rdkafka-f7d35501-c1bb-496f-b13c-febca4a724e4 /172.19.0.28    rdkafka
query-subscription-consumer events-subscription-results       0          -               0               -               rdkafka-89e0c076-0d8a-4944-891c-59c1465026e0 /172.19.0.27    rdkafka

The instance I run it on has 2 CPUs and 4 GB of RAM.
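For reference, the LAG column in the tables above is just LOG-END-OFFSET minus CURRENT-OFFSET. A small filter can pull out only the groups that are actually falling behind; the `kafka-consumer-groups` invocation in the comment is the stock tool shipped in the `cp-kafka` image, and the service name `kafka` is assumed from the compose file (a sketch, demonstrated here on a pasted sample of that output):

```shell
# Flag consumer groups with non-zero lag. On the host, the raw tables come from:
#   docker-compose exec kafka kafka-consumer-groups \
#     --bootstrap-server localhost:9092 --describe --all-groups
lag_report() {
  # Columns: GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG ...
  # Skip headers, skip "-" (no committed offset yet), keep lag > 0.
  awk '$1 != "GROUP" && NF >= 6 && $6 != "-" && $6+0 > 0 { print $1, $2, "lag=" $6 }'
}

printf '%s\n' \
  'GROUP                TOPIC         PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG' \
  'snuba-post-processor events        0          3911            3919            8' \
  'ingest-consumer      ingest-events 0          3918            3918            0' |
  lag_report
```

A `-` in CURRENT-OFFSET/LAG (as in several tables above) means the group has not committed an offset yet, which is normal for idle topics; the filter ignores those rather than treating them as lag.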

Expected Result

No restarts

Actual Result

Containers keep restarting at random (note the `Up 16 minutes` entries in the `docker ps` output above next to peers that have been up for 7 hours or 2 days).
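On a host below the recommended minimum (the install log below warns about both CPU and RAM), the usual cause of random restarts is the kernel OOM killer. Docker records this per container; the live `docker` commands in the comments need the actual host, so the runnable part of this sketch is just an exit-code classifier (137 = 128 + SIGKILL is the typical signature of an OOM kill):

```shell
# Classify a container exit code; 137 = 128 + 9 (SIGKILL), the usual
# signature of the kernel OOM killer on a memory-starved host.
explain_exit() {
  case "$1" in
    0)   echo "clean exit" ;;
    137) echo "SIGKILL, likely OOM-killed" ;;
    *)   echo "exit code $1" ;;
  esac
}

# On the host, gather the real data per container (requires docker):
#   docker ps -a --format '{{.Names}}' |
#     xargs docker inspect --format \
#       '{{.Name}} OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
# and cross-check the kernel log:
#   dmesg -T | grep -i 'killed process'
explain_exit 137
```

If `OOMKilled=true` shows up for the consumers that cycle every few minutes, the fix is more RAM (or swap) rather than anything Sentry-specific; the containers come back only because the compose file sets a restart policy.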

Logs

▶ Defining variables and helpers ...
/tmp/tmp.GnJAFzxQMI: line 6: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

▶ Parsing command line ...

▶ Setting up error handling ...

▶ Checking minimum requirements ...
WARN: Recommended minimum CPU cores available to Docker is 4, found 2
WARN: Recommended minimum RAM available to Docker is 7800 MB, found 3896 MB

▶ Creating volumes for persistent storage ...
Created sentry-data.
Created sentry-postgres.
Created sentry-redis.
Created sentry-zookeeper.
Created sentry-kafka.
Created sentry-clickhouse.
Created sentry-symbolicator.

▶ Ensuring files from examples ...
Creating sentry/sentry.conf.py...
Creating sentry/config.yml...
Creating sentry/requirements.txt...
Creating symbolicator/config.yml...
Creating relay/config.yml...

▶ Generating secret key ...
Secret key written to sentry/config.yml

▶ Replacing TSDB ...

▶ Fetching and updating Docker images ...
Some service image(s) must be built from source by running:
    docker-compose build snuba-cleanup sentry-cleanup symbolicator-cleanup
nightly: Pulling from getsentry/sentry
Digest: sha256:ef04845c3560f3a180da427bfeaf0de37b6b18b513ab1e64312f237aea064d83
Status: Image is up to date for getsentry/sentry:nightly
docker.io/getsentry/sentry:nightly

▶ Building and tagging Docker images ...

smtp uses an image, skipping
memcached uses an image, skipping
redis uses an image, skipping
postgres uses an image, skipping
zookeeper uses an image, skipping
kafka uses an image, skipping
clickhouse uses an image, skipping
geoipupdate uses an image, skipping
snuba-api uses an image, skipping
snuba-consumer uses an image, skipping
snuba-outcomes-consumer uses an image, skipping
snuba-sessions-consumer uses an image, skipping
snuba-transactions-consumer uses an image, skipping
snuba-replacer uses an image, skipping
snuba-subscription-consumer-events uses an image, skipping
snuba-subscription-consumer-transactions uses an image, skipping
symbolicator uses an image, skipping
web uses an image, skipping
cron uses an image, skipping
worker uses an image, skipping
ingest-consumer uses an image, skipping
post-process-forwarder uses an image, skipping
subscription-consumer-events uses an image, skipping
subscription-consumer-transactions uses an image, skipping
relay uses an image, skipping
nginx uses an image, skipping
Building snuba-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> bf08f90718ea
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in 8ce81f8cbffa
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [268 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [9504 B]
Fetched 8423 kB in 2s (4876 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  lsb-base sensible-utils
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  default-mta | mail-transport-agent
The following NEW packages will be installed:
  cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 0s (10.2 MB/s)
Selecting previously unselected package sensible-utils.
(Reading database ... 6840 files and directories currently installed.)
Preparing to unpack .../sensible-utils_0.0.12_all.deb ...
Unpacking sensible-utils (0.0.12) ...
Selecting previously unselected package lsb-base.
Preparing to unpack .../lsb-base_10.2019051400_all.deb ...
Unpacking lsb-base (10.2019051400) ...
Selecting previously unselected package cron.
Preparing to unpack .../cron_3.0pl1-134+deb10u1_amd64.deb ...
Unpacking cron (3.0pl1-134+deb10u1) ...
Setting up lsb-base (10.2019051400) ...
Setting up sensible-utils (0.0.12) ...
Setting up cron (3.0pl1-134+deb10u1) ...
Adding group `crontab' (GID 101) ...
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 8ce81f8cbffa
 ---> 711963fb10b8
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> 78f53573f54a
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in 9e54b181e6e2
Removing intermediate container 9e54b181e6e2
 ---> d9b39d5bb783

Successfully built d9b39d5bb783
Successfully tagged snuba-cleanup-onpremise-local:latest
Building symbolicator-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> ee6e43b3fef9
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in 6bfc908643b1
Ign:1 http://deb.debian.org/debian stretch InRelease
Get:2 http://deb.debian.org/debian stretch-updates InRelease [93.6 kB]
Get:3 http://deb.debian.org/debian stretch Release [118 kB]
Get:4 http://deb.debian.org/debian stretch Release.gpg [2410 B]
Get:5 http://deb.debian.org/debian stretch-updates/main amd64 Packages [2596 B]
Get:6 http://deb.debian.org/debian stretch/main amd64 Packages [7080 kB]
Get:7 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
Get:8 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [661 kB]
Fetched 8011 kB in 2s (3573 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  exim4 | postfix | mail-transport-agent
The following NEW packages will be installed:
  cron
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 95.4 kB of archives.
After this operation, 257 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 cron amd64 3.0pl1-128+deb9u1 [95.4 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 95.4 kB in 0s (0 B/s)
Selecting previously unselected package cron.
(Reading database ... 6661 files and directories currently installed.)
Preparing to unpack .../cron_3.0pl1-128+deb9u1_amd64.deb ...
Unpacking cron (3.0pl1-128+deb9u1) ...
Setting up cron (3.0pl1-128+deb9u1) ...
Adding group `crontab' (GID 101) ...
Done.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 6bfc908643b1
 ---> 6b151ae7b9b7
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> 2e78b3d196ab
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in cebb49102e6e
Removing intermediate container cebb49102e6e
 ---> 6a9da3eeb68b

Successfully built 6a9da3eeb68b
Successfully tagged symbolicator-cleanup-onpremise-local:latest
Building sentry-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> 80e0b82f6f37
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in a38484e6408d
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [268 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [9504 B]
Fetched 8423 kB in 2s (4803 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  lsb-base sensible-utils
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  default-mta | mail-transport-agent
The following NEW packages will be installed:
  cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 0s (10.4 MB/s)
Selecting previously unselected package sensible-utils.
(Reading database ... 11935 files and directories currently installed.)
Preparing to unpack .../sensible-utils_0.0.12_all.deb ...
Unpacking sensible-utils (0.0.12) ...
Selecting previously unselected package lsb-base.
Preparing to unpack .../lsb-base_10.2019051400_all.deb ...
Unpacking lsb-base (10.2019051400) ...
Selecting previously unselected package cron.
Preparing to unpack .../cron_3.0pl1-134+deb10u1_amd64.deb ...
Unpacking cron (3.0pl1-134+deb10u1) ...
Setting up lsb-base (10.2019051400) ...
Setting up sensible-utils (0.0.12) ...
Setting up cron (3.0pl1-134+deb10u1) ...
Adding group `crontab' (GID 101) ...
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container a38484e6408d
 ---> c387ea20da51
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> ee1a2550159b
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in c824e3091a1e
Removing intermediate container c824e3091a1e
 ---> 8797871fdd7f

Successfully built 8797871fdd7f
Successfully tagged sentry-cleanup-onpremise-local:latest

Docker images built.

▶ Turning things off ...
Removing network onpremise_default
Network onpremise_default not found.
Removing network sentry_onpremise_default
Network sentry_onpremise_default not found.

▶ Setting up Zookeeper ...
Creating network "sentry_onpremise_default" with the default driver
Creating volume "sentry_onpremise_sentry-secrets" with default driver
Creating volume "sentry_onpremise_sentry-smtp" with default driver
Creating volume "sentry_onpremise_sentry-zookeeper-log" with default driver
Creating volume "sentry_onpremise_sentry-kafka-log" with default driver
Creating volume "sentry_onpremise_sentry-smtp-log" with default driver
Creating volume "sentry_onpremise_sentry-clickhouse-log" with default driver

▶ Bootstrapping and migrating Snuba ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_zookeeper_1  ...
Creating sentry_onpremise_redis_1      ...
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_redis_1      ... done
Creating sentry_onpremise_zookeeper_1  ... done
Creating sentry_onpremise_kafka_1      ...
Creating sentry_onpremise_kafka_1      ... done
2021-03-18 13:31:26,977 Attempting to connect to Kafka (attempt 0)...
2021-03-18 13:31:28,993 Attempting to connect to Kafka (attempt 1)...
2021-03-18 13:31:30,996 Attempting to connect to Kafka (attempt 2)...
2021-03-18 13:31:31,025 Connected to Kafka on attempt 2
2021-03-18 13:31:31,214 Creating Kafka topics...
2021-03-18 13:31:31,896 Topic ingest-sessions created
2021-03-18 13:31:31,897 Topic cdc created
2021-03-18 13:31:31,897 Topic events created
2021-03-18 13:31:31,898 Topic event-replacements created
2021-03-18 13:31:31,898 Topic snuba-commit-log created
2021-03-18 13:31:31,898 Topic outcomes created
Starting sentry_onpremise_zookeeper_1 ...
Starting sentry_onpremise_zookeeper_1 ... done
Starting sentry_onpremise_clickhouse_1 ...
Starting sentry_onpremise_redis_1      ...
Starting sentry_onpremise_clickhouse_1 ... done
Starting sentry_onpremise_redis_1      ... done
Starting sentry_onpremise_kafka_1      ...
Starting sentry_onpremise_kafka_1      ... done
Finished running migrations

▶ Creating additional Kafka topics ...
Starting sentry_onpremise_zookeeper_1 ...
Starting sentry_onpremise_zookeeper_1 ... done
Created topic ingest-attachments.

Starting sentry_onpremise_zookeeper_1 ...
Starting sentry_onpremise_zookeeper_1 ... done
Created topic ingest-transactions.

Starting sentry_onpremise_zookeeper_1 ...
Starting sentry_onpremise_zookeeper_1 ... done
Created topic ingest-events.

▶ Ensuring proper PostgreSQL version ...

▶ Setting up database ...
Creating sentry_onpremise_smtp_1 ...
Starting sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_postgres_1   ...
Starting sentry_onpremise_clickhouse_1 ... done
Starting sentry_onpremise_redis_1      ...
Starting sentry_onpremise_zookeeper_1  ...
Starting sentry_onpremise_redis_1      ... done
Starting sentry_onpremise_zookeeper_1  ... done
Creating sentry_onpremise_memcached_1  ...
Creating sentry_onpremise_symbolicator_1 ...
Starting sentry_onpremise_kafka_1        ...
Starting sentry_onpremise_kafka_1        ... done
Creating sentry_onpremise_snuba-sessions-consumer_1 ...
Creating sentry_onpremise_snuba-api_1               ...
Creating sentry_onpremise_snuba-transactions-consumer_1 ...
Creating sentry_onpremise_snuba-subscription-consumer-transactions_1 ...
Creating sentry_onpremise_snuba-consumer_1                           ...
Creating sentry_onpremise_snuba-subscription-consumer-events_1       ...
Creating sentry_onpremise_snuba-replacer_1                           ...
Creating sentry_onpremise_snuba-outcomes-consumer_1                  ...
Creating sentry_onpremise_memcached_1                                ... done
Creating sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Creating sentry_onpremise_snuba-consumer_1                           ... done
Creating sentry_onpremise_snuba-sessions-consumer_1                  ... done
Creating sentry_onpremise_snuba-transactions-consumer_1              ... done
Creating sentry_onpremise_smtp_1                                     ... done
Creating sentry_onpremise_snuba-replacer_1                           ... done
Creating sentry_onpremise_snuba-api_1                                ... done
Creating sentry_onpremise_symbolicator_1                             ... done
Creating sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Creating sentry_onpremise_postgres_1                                 ... done
Creating sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Installing additional dependencies...

13:32:33 [WARNING] sentry.utils.geo: Error opening GeoIP database: /geoip/GeoLite2-City.mmdb
13:32:34 [WARNING] sentry.utils.geo: Error opening GeoIP database in Rust: /geoip/GeoLite2-City.mmdb
13:32:40 [INFO] sentry.plugins.github: apps-not-configured
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, jira_ac, nodestore, sentry, sessions, sites, social_auth
Running migrations:
  Applying sentry.0001_initial... OK
  Applying contenttypes.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying jira_ac.0001_initial... OK
  Applying nodestore.0001_initial... OK
  Applying nodestore.0002_nodestore_no_dictfield... OK
  Applying sentry.0002_912_to_recent... OK
  Applying sentry.0003_auto_20191022_0122... OK
  Applying sentry.0004_bitfieldtestmodel_blankjsonfieldtestmodel_callabledefaultmodel_jsonfieldtestmodel_jsonfieldwithdefau... OK
  Applying sentry.0005_fix_content_types... OK
  Applying sentry.0006_sentryapp_date_published... OK
  Applying sentry.0007_auto_20191029_0131... OK
  Applying sentry.0008_auto_20191030_0016... OK
  Applying sentry.0009_auto_20191101_1608... OK
  Applying sentry.0010_auto_20191104_1641... OK
  Applying sentry.0011_remove_pagerdutyservice_service_id_from_state... OK
  Applying sentry.0012_remove_pagerdutyservice_service_id... OK
  Applying sentry.0013_auto_20191111_1829... OK
  Applying sentry.0014_delete_sentryappwebhookerror... OK
  Applying sentry.0015_delete_sentryappwebhookerror_db... OK
  Applying sentry.0016_delete_alert_rule_deprecated_fields... OK
  Applying sentry.0017_incident_aggregation... OK
  Applying sentry.0018_discoversavedquery_version... OK
  Applying sentry.0019_auto_20191114_2040... OK
  Applying sentry.0020_auto_20191125_1420... OK
  Applying sentry.0021_auto_20191203_1803... OK
  Applying sentry.0021_auto_20191202_1716... OK
  Applying sentry.0022_merge... OK
  Applying sentry.0023_hide_environment_none_20191126... OK
  Applying sentry.0024_auto_20191230_2052...Nothing to do, skipping migration.

 OK
  Applying sentry.0025_organizationaccessrequest_requester... OK
  Applying sentry.0026_delete_event... OK
  Applying sentry.0027_exporteddata... OK
  Applying sentry.0028_user_reports... OK
  Applying sentry.0029_discover_query_upgrade... OK
  Applying sentry.0030_auto_20200201_0039... OK
  Applying sentry.0031_delete_alert_rules_and_incidents... OK
  Applying sentry.0032_delete_alert_email... OK
  Applying sentry.0033_auto_20200210_2137... OK
  Applying sentry.0034_auto_20200210_2311... OK
  Applying sentry.0035_auto_20200127_1711... OK
  Applying sentry.0036_auto_20200213_0106... OK
  Applying sentry.0037_auto_20200213_0140... OK
  Applying sentry.0038_auto_20200213_1904... OK
  Applying sentry.0039_delete_incidentsuspectcommit... OK
  Applying sentry.0040_remove_incidentsuspectcommittable... OK
  Applying sentry.0041_incidenttrigger_date_modified... OK
  Applying sentry.0042_auto_20200214_1607... OK
  Applying sentry.0043_auto_20200218_1903... OK
  Applying sentry.0044_auto_20200219_0018... OK
  Applying sentry.0045_remove_incidentactivity_event_stats_snapshot... OK
  Applying sentry.0046_auto_20200221_1735... OK
  Applying sentry.0047_auto_20200224_2319... OK
  Applying sentry.0048_auto_20200302_1825... OK
  Applying sentry.0049_auto_20200304_0254... OK
  Applying sentry.0050_auto_20200306_2346... OK
  Applying sentry.0051_fix_auditlog_pickled_data... OK
  Applying sentry.0052_organizationonboardingtask_completion_seen... OK
  Applying sentry.0053_migrate_alert_task_onboarding... OK
  Applying sentry.0054_create_key_transaction... OK
  Applying sentry.0055_query_subscription_status... OK
  Applying sentry.0056_remove_old_functions... OK
  Applying sentry.0057_remove_unused_project_flag... OK
  Applying sentry.0058_project_issue_alerts_targeting... OK
  Applying sentry.0059_add_new_sentry_app_features... OK
  Applying sentry.0060_add_file_eventattachment_index... OK
  Applying sentry.0061_alertrule_partial_index... OK
  Applying sentry.0062_key_transactions_unique_with_owner... OK
  Applying sentry.0063_drop_alertrule_constraint... OK
  Applying sentry.0064_project_has_transactions... OK
  Applying sentry.0065_add_incident_status_method... OK
  Applying sentry.0066_alertrule_manager... OK
  Applying sentry.0067_migrate_rules_alert_targeting... OK
  Applying sentry.0068_project_default_flags... OK
  Applying sentry.0069_remove_tracked_superusers... OK
  Applying sentry.0070_incident_snapshot_support... OK
  Applying sentry.0071_add_default_fields_model_subclass... OK
  Applying sentry.0072_alert_rules_query_changes... OK
  Applying sentry.0073_migrate_alert_query_model... OK
  Applying sentry.0074_add_metric_alert_feature... OK
  Applying sentry.0075_metric_alerts_fix_releases... OK
  Applying sentry.0076_alert_rules_disable_constraints... OK
  Applying sentry.0077_alert_query_col_drop_state... OK
  Applying sentry.0078_incident_field_updates... OK
  Applying sentry.0079_incidents_remove_query_field_state... OK
  Applying sentry.0080_alert_rules_drop_unused_tables_cols... OK
  Applying sentry.0081_add_integraiton_upgrade_audit_log... OK
  Applying sentry.0082_alert_rules_threshold_float... OK
  Applying sentry.0083_add_max_length_webhook_url... OK
  Applying sentry.0084_exported_data_blobs... OK
  Applying sentry.0085_fix_error_rate_snuba_query... OK
  Applying sentry.0086_sentry_app_installation_for_provider... OK
  Applying sentry.0087_fix_time_series_data_type... OK
  Applying sentry.0088_rule_level_resolve_threshold_type... OK
  Applying sentry.0089_rule_level_fields_backfill... OK
  Applying sentry.0090_fix_auditlog_pickled_data_take_2... OK
  Applying sentry.0091_alertruleactivity... OK
  Applying sentry.0092_remove_trigger_threshold_type_nullable... OK
  Applying sentry.0093_make_identity_user_id_textfield... OK
  Applying sentry.0094_cleanup_unreferenced_event_files... OK
  Applying sentry.0095_ruleactivity... OK
  Applying sentry.0096_sentry_app_component_skip_load_on_open... OK
  Applying sentry.0097_add_sentry_app_id_to_sentry_alertruletriggeraction... OK
  Applying sentry.0098_add-performance-onboarding... OK
  Applying sentry.0099_fix_project_platforms... OK
  Applying sentry.0100_file_type_on_event_attachment... OK
  Applying sentry.0101_backfill_file_type_on_event_attachment... OK
  Applying sentry.0102_collect_relay_analytics... OK
  Applying sentry.0103_project_has_alert_filters... OK
  Applying sentry.0104_collect_relay_public_key_usage... OK
  Applying sentry.0105_remove_nullability_of_event_attachment_type... OK
  Applying sentry.0106_service_hook_project_id_nullable... OK
  Applying sentry.0107_remove_spaces_from_slugs... OK
  Applying sentry.0108_update_fileblob_action... OK
  Applying sentry.0109_sentry_app_creator... OK
  Applying sentry.0110_sentry_app_creator_backill... OK
  Applying sentry.0111_snuba_query_event_type... OK
  Applying sentry.0112_groupinboxmodel... OK
  Applying sentry.0113_add_repositoryprojectpathconfig... OK
  Applying sentry.0114_add_unhandled_savedsearch... OK
  Applying sentry.0115_add_checksum_to_debug_file... OK
  Applying sentry.0116_backfill_debug_file_checksum... OK
  Applying sentry.0117_dummy-activityupdate... OK
  Applying sentry.0118_backfill_snuba_query_event_types... OK
  Applying sentry.0119_fix_set_none... OK
  Applying sentry.0120_commit_author_charfield... OK
GroupInbox: 100% |#                                             | ETA:  --:--:--
 OK
  Applying sentry.0122_add_release_status... OK
  Applying sentry.0123_groupinbox_addprojandorg... OK
  Applying sentry.0124_add_release_status_model... OK
  Applying sentry.0125_add_platformexternalissue_project_id... OK
  Applying sentry.0126_make_platformexternalissue_group_id_flexfk... OK
  Applying sentry.0127_backfill_platformexternalissue_project_id... OK
  Applying sentry.0128_change_dashboards... OK
  Applying sentry.0129_remove_dashboard_keys... OK
  Applying sentry.0130_remove_old_widget_models... OK
  Applying sentry.0131_drop_widget_tables... OK
  Applying sentry.0132_groupownermodel... OK
  Applying sentry.0133_dashboard_delete_object_status... OK
  Applying sentry.0134_dashboard_drop_object_status_column... OK
  Applying sentry.0135_removinguniquegroupownerconstraint... OK
  Applying sentry.0136_issue_alert_filter_all_orgs... OK
  Applying sentry.0137_dashboard_widget_interval... OK
  Applying sentry.0138_widget_query_remove_interval... OK
  Applying sentry.0139_remove_widgetquery_interval... OK
  Applying sentry.0140_subscription_checker... OK
  Applying sentry.0141_remove_widget_constraints... OK
  Applying sentry.0142_add_dashboard_tombstone... OK
  Applying sentry.0143_add_alerts_integrationfeature... OK
  Applying sentry.0144_add_publish_request_inprogress_status... OK
  Applying sentry.0145_rename_alert_rule_feature... OK
  Applying sentry.0146_backfill_members_alert_write... OK
  Applying sentry.0147_add_groupinbox_date_added_index... OK
  Applying sentry.0148_group_id_bigint... OK
  Applying sentry.0149_bigint... OK
  Applying sentry.0150_remove_userreport_eventattachment_constraints... OK
  Applying sentry.0151_add_world_map_dashboard_widget_type... OK
  Applying sentry.0152_remove_slack_workspace_orgintegrations... OK
  Applying sentry.0153_add_big_number_dashboard_widget_type... OK
  Applying sentry.0154_groupedmessage_inbox_sort... OK
  Applying sentry.0155_add_dashboard_query_orderby... OK
  Applying sentry.0156_add_mark_reviewed_activity... OK
  Applying sentry.0157_make_repositoryprojectpathconfig_organization_integration_nullable... OK
  Applying sentry.0158_create_externalteam_table... OK
  Applying sentry.0159_create_externaluser_table... OK
  Applying sentry.0160_create_projectcodeowners_table... OK
  Applying sentry.0161_add_saved_search_sort... OK
  Applying sentry.0162_backfill_saved_search_sort... OK
  Applying sentry.0163_add_organizationmember_and_external_name... OK
  Applying sentry.0164_add_protect_on_delete_codeowners... OK
  Applying sentry.0165_metric_alerts_fix_group_ids... OK
  Applying sentry.0166_create_notificationsetting_table... OK
  Applying sentry.0167_rm_organization_integration_from_projectcodeowners... OK
  Applying sentry.0168_demo_orgs_users... OK
  Applying sentry.0169_delete_organization_integration_from_projectcodeowners... OK
  Applying sentry.0170_actor_introduction... OK
  Applying sentry.0171_backfill_actors... OK
  Applying sentry.0172_rule_owner_fields... OK
  Applying sentry.0173_remove_demo_flag... OK
  Applying sentry.0174_my_issues_saved_search... OK
  Applying sentry.0175_make_targets_nullable... OK
  Applying sentry.0176_remove_targets... OK
  Applying sentry.0177_drop_targets... OK
  Applying sentry.0178_add_new_target_column... OK
  Applying sentry.0179_update_legacy_discover_saved_query_timestamps... OK
  Applying sentry.0180_add_saved_search_sorts... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
  Applying social_auth.0001_initial... OK
13:33:53 [WARNING] sentry: Cannot initiate onboarding for organization (1) due to missing owners
Created internal Sentry project (slug=internal, id=1)

Would you like to create a user account now? [Y/n]: Y
Email: test@test.com
Password:
Repeat for confirmation:
Added to organization: sentry
User created: test@test.com
Creating missing DSNs
Correcting Group.num_comments counter

▶ Migrating file storage ...
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ba3557a56b15: Already exists
Digest: sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be
Status: Downloaded newer image for alpine:latest

▶ Generating Relay credentials ...
Relay credentials written to relay/credentials.json

▶ Setting up GeoIP integration ...
Setting up IP address geolocation ...
Installing (empty) IP address geolocation database ... done.
IP address geolocation is not configured for updates.
See https://develop.sentry.dev/self-hosted/geolocation/ for instructions.
Error setting up IP address geolocation.

-----------------------------------------------------------------

You're all done! Run the following command to get Sentry running:

  docker-compose up -d
BYK commented 3 years ago

Please share the output of docker-compose logs. Install logs don't provide any value for a restart that happens after the install.
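Something along these lines collects them in one go (a sketch; the service names are assumptions from the default docker-compose.yml of this repo and may differ in your checkout - run it from the onpremise directory):

```shell
#!/bin/sh
# Services most relevant to the random restarts described above
# (names assumed from the default self-hosted docker-compose.yml).
SERVICES="kafka zookeeper clickhouse ingest-consumer post-process-forwarder \
subscription-consumer-events subscription-consumer-transactions"

if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  # --timestamps makes it possible to correlate restarts across services
  docker-compose logs --timestamps --tail=500 $SERVICES > sentry-logs.txt
else
  echo "run this from the directory containing docker-compose.yml" >&2
fi
```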

BYK commented 3 years ago

Yeah, looks like your Kafka instance is broken, which affects all downstream services. What is your ask from the Sentry team?

romankydybets commented 3 years ago

@BYK you are right. I'm asking because before this we were running version 9.1.2, which worked great for a year, but we needed the Slack integration, which was deprecated there, so we migrated to the new on-premise Sentry. In the couple of days since we started using it we've seen restarts and unexpected behavior. I searched the existing issues but didn't find anything, so I decided to open a new one; maybe we can find the root cause. I don't expect this to be a machine-sizing issue, because monitoring shows the CPU is mostly idle and RAM usage is about 3.1 GB.

So I started experimenting.

What I did just now: I deleted the data volumes (the Zookeeper volume as well) and went through the whole process above again,

then tried to use sentry-cli to send a test message into Sentry, but had no luck with that either.

romankydybets commented 3 years ago

And one more thing that I don't understand. This is the consumer-group status from Kafka before I sent a test message:

GROUP                                                      TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-post-processor:sync:2e1d36f88db011eb91df0242ac150016 snuba-commit-log 0          -               0               -               rdkafka-1a77ea65-22cd-4839-90bd-b9cc6a0d3265 /172.21.0.22    rdkafka

GROUP              TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
transactions_group events          0          -               0               -               rdkafka-cb03a8bf-b3b0-4386-9468-1f37a94a2ab3 /172.21.0.18    rdkafka

GROUP           TOPIC              PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-replacers event-replacements 0          -               0               -               rdkafka-77236c8a-74ee-4314-b83d-74ecd5f796ca /172.21.0.10    rdkafka

GROUP                       TOPIC                             PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
query-subscription-consumer events-subscription-results       0          -               0               -               rdkafka-6cb73715-c4a6-4046-855c-e69c4a906670 /172.21.0.23    rdkafka
query-subscription-consumer transactions-subscription-results 0          -               0               -               rdkafka-9621c000-33da-4169-b37e-b9a56e615392 /172.21.0.21    rdkafka

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-consumers outcomes        0          -               0               -               rdkafka-46b6362a-d79c-46d1-a929-cfc011abc4b6 /172.21.0.16    rdkafka
snuba-consumers events          0          -               0               -               rdkafka-749c2186-f7f1-4f65-b65c-6e785f92d543 /172.21.0.17    rdkafka
snuba-consumers ingest-sessions 0          -               0               -               rdkafka-b698206c-3bee-4856-b490-f1d46df7ead5 /172.21.0.13    rdkafka

GROUP                                                     TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
subscriptions-commit-log-25054bf08db011eb9fb90242ac15000c snuba-commit-log 0          -               0               -               rdkafka-8832ef89-79d6-4b5b-84b7-35fdbfd66ca5 /172.21.0.12    rdkafka

GROUP           TOPIC               PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
ingest-consumer ingest-events       0          -               0               -               rdkafka-d41adde1-8f4e-45be-8f30-64fbd87cd6e3 /172.21.0.26    rdkafka
ingest-consumer ingest-transactions 0          -               0               -               rdkafka-d41adde1-8f4e-45be-8f30-64fbd87cd6e3 /172.21.0.26    rdkafka
ingest-consumer ingest-attachments  0          -               0               -               rdkafka-d41adde1-8f4e-45be-8f30-64fbd87cd6e3 /172.21.0.26    rdkafka

Consumer group 'subscriptions-commit-log-f0552ea28daf11eb955a0242ac15000f' has no active members.

GROUP                                      TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-transactions-subscriptions-consumers events          0          -               0               -               rdkafka-73ecc77a-334a-4cd2-8f02-9af5be708c37 /172.21.0.14    rdkafka

Consumer group 'subscriptions-commit-log-f01645c08daf11ebb5f10242ac15000e' has no active members.

GROUP                                                     TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
subscriptions-commit-log-26fe0dfc8db011eb89d90242ac15000e snuba-commit-log 0          -               0               -               rdkafka-380f667b-a728-43fa-b892-6cbd3e459218 /172.21.0.14    rdkafka

GROUP                                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-events-subscriptions-consumers events          0          -               0               -               rdkafka-d4d61aa1-f376-44e2-9894-b4292f66108a /172.21.0.12    rdkafka

GROUP                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-post-processor events          0          -               0               -               rdkafka-d18612a7-57bf-4b79-b5b5-e648ee0990b3 /172.21.0.22    rdkafka

This is after I sent 3 test messages:

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-consumers outcomes        0          5               5               0               rdkafka-46b6362a-d79c-46d1-a929-cfc011abc4b6 /172.21.0.16    rdkafka
snuba-consumers events          0          3               3               0               rdkafka-749c2186-f7f1-4f65-b65c-6e785f92d543 /172.21.0.17    rdkafka
snuba-consumers ingest-sessions 0          12              12              0               rdkafka-b698206c-3bee-4856-b490-f1d46df7ead5 /172.21.0.13    rdkafka

GROUP                                                     TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
subscriptions-commit-log-25054bf08db011eb9fb90242ac15000c snuba-commit-log 0          -               6               -               rdkafka-8832ef89-79d6-4b5b-84b7-35fdbfd66ca5 /172.21.0.12    rdkafka

GROUP           TOPIC               PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
ingest-consumer ingest-events       0          3               3               0               rdkafka-d41adde1-8f4e-45be-8f30-64fbd87cd6e3 /172.21.0.26    rdkafka
ingest-consumer ingest-transactions 0          -               0               -               rdkafka-d41adde1-8f4e-45be-8f30-64fbd87cd6e3 /172.21.0.26    rdkafka
ingest-consumer ingest-attachments  0          -               0               -               rdkafka-d41adde1-8f4e-45be-8f30-64fbd87cd6e3 /172.21.0.26    rdkafka

GROUP                                                      TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-post-processor:sync:2e1d36f88db011eb91df0242ac150016 snuba-commit-log 0          -               6               -               rdkafka-1a77ea65-22cd-4839-90bd-b9cc6a0d3265 /172.21.0.22    rdkafka

GROUP                                      TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-transactions-subscriptions-consumers events          0          2               3               1               rdkafka-73ecc77a-334a-4cd2-8f02-9af5be708c37 /172.21.0.14    rdkafka

GROUP                                                     TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
subscriptions-commit-log-26fe0dfc8db011eb89d90242ac15000e snuba-commit-log 0          -               6               -               rdkafka-380f667b-a728-43fa-b892-6cbd3e459218 /172.21.0.14    rdkafka

GROUP              TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
transactions_group events          0          3               3               0               rdkafka-cb03a8bf-b3b0-4386-9468-1f37a94a2ab3 /172.21.0.18    rdkafka

GROUP                                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-events-subscriptions-consumers events          0          2               3               1               rdkafka-d4d61aa1-f376-44e2-9894-b4292f66108a /172.21.0.12    rdkafka

GROUP                TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-post-processor events          0          3               3               0               rdkafka-d18612a7-57bf-4b79-b5b5-e648ee0990b3 /172.21.0.22    rdkafka

GROUP           TOPIC              PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
snuba-replacers event-replacements 0          -               0               -               rdkafka-77236c8a-74ee-4314-b83d-74ecd5f796ca /172.21.0.10    rdkafka

GROUP                       TOPIC                             PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                  HOST            CLIENT-ID
query-subscription-consumer events-subscription-results       0          -               0               -               rdkafka-6cb73715-c4a6-4046-855c-e69c4a906670 /172.21.0.23    rdkafka
query-subscription-consumer transactions-subscription-results 0          -               0               -               rdkafka-9621c000-33da-4169-b37e-b9a56e615392 /172.21.0.21    rdkafka

Is it OK that I see LAG here?
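For repeated checks, tabular output like the above can be filtered down to just the rows with non-zero lag (a sketch; the kafka-consumer-groups invocation in the comment is an assumption based on the default compose setup):

```shell
#!/bin/sh
# check_lag: keep only rows whose LAG column (field 6 in the tabular
# output above) is a number greater than zero. Header rows and rows
# with "-" in the LAG column are dropped by the regex.
check_lag() {
  awk 'NF >= 6 && $6 ~ /^[0-9]+$/ && $6 > 0 { print $1, $2, "lag=" $6 }'
}

# The real pipeline would be something like (service name and flags are
# assumptions from the default compose setup):
#   docker-compose exec kafka kafka-consumer-groups \
#     --bootstrap-server localhost:9092 --describe --all-groups | check_lag
#
# Demo with one captured row from the output above:
printf 'snuba-events-subscriptions-consumers events 0 2 3 1 rdkafka-x /172.21.0.12 rdkafka\n' | check_lag
# prints: snuba-events-subscriptions-consumers events lag=1
```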

romankydybets commented 3 years ago

As you can see, some of the containers were restarted:

CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS              PORTS                          NAMES
942dbf2f7bf3        nginx:1.16                             "nginx -g 'daemon of…"   10 hours ago        Up 10 hours         0.0.0.0:80->80/tcp             sentry_onpremise_nginx_1
017e81b51333        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 10 hours         9000/tcp                       sentry_onpremise_web_1
ad574fe7dcfb        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 10 hours         9000/tcp                       sentry_onpremise_cron_1
fcd2043e8266        sentry-cleanup-onpremise-local         "/entrypoint.sh '0 0…"   10 hours ago        Up 10 hours         9000/tcp                       sentry_onpremise_sentry-cleanup_1
7086940c1914        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 10 hours         9000/tcp                       sentry_onpremise_worker_1
9dd801a9acf5        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 2 hours          9000/tcp                       sentry_onpremise_ingest-consumer_1
bf03e9f13e59        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 2 hours          9000/tcp                       sentry_onpremise_subscription-consumer-events_1
8aa7b92f2343        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 2 hours          9000/tcp                       sentry_onpremise_post-process-forwarder_1
89ae82308c61        getsentry/sentry:nightly               "/etc/sentry/entrypo…"   10 hours ago        Up 2 hours          9000/tcp                       sentry_onpremise_subscription-consumer-transactions_1
816ae665f835        getsentry/relay:nightly                "/bin/bash /docker-e…"   10 hours ago        Up 10 hours         3000/tcp                       sentry_onpremise_relay_1
e275a98993e3        snuba-cleanup-onpremise-local          "/entrypoint.sh '*/5…"   10 hours ago        Up 10 hours         1218/tcp                       sentry_onpremise_snuba-cleanup_1
ddf1401c1d8e        symbolicator-cleanup-onpremise-local   "/entrypoint.sh '55 …"   10 hours ago        Up 10 hours         3021/tcp                       sentry_onpremise_symbolicator-cleanup_1
0d7c88e92196        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 2 hours          1218/tcp                       sentry_onpremise_snuba-subscription-consumer-transactions_1
3ea96a9a788a        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 10 hours         1218/tcp                       sentry_onpremise_snuba-replacer_1
acd3beee950b        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 2 hours          1218/tcp                       sentry_onpremise_snuba-subscription-consumer-events_1
048111d0f0fb        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 2 hours          1218/tcp                       sentry_onpremise_snuba-sessions-consumer_1
c238a79fb6e3        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 2 hours          1218/tcp                       sentry_onpremise_snuba-transactions-consumer_1
9c7a4a38cd7a        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 10 hours         1218/tcp                       sentry_onpremise_snuba-api_1
b8c145a191a0        getsentry/snuba:nightly                "./docker_entrypoint…"   10 hours ago        Up 2 hours          1218/tcp                       sentry_onpremise_snuba-consumer_1
d0ad8b57f0a8        postgres:9.6                           "docker-entrypoint.s…"   10 hours ago        Up 10 hours         5432/tcp                       sentry_onpremise_postgres_1
499f33d08b77        tianon/exim4                           "docker-entrypoint.s…"   10 hours ago        Up 10 hours         25/tcp                         sentry_onpremise_smtp_1
71d67a66e9ee        memcached:1.5-alpine                   "docker-entrypoint.s…"   10 hours ago        Up 10 hours         11211/tcp                      sentry_onpremise_memcached_1
4c57fdbdfa11        getsentry/symbolicator:nightly         "/bin/bash /docker-e…"   10 hours ago        Up 10 hours         3021/tcp                       sentry_onpremise_symbolicator_1
78c5a6d4a088        confluentinc/cp-kafka:5.5.0            "/etc/confluent/dock…"   10 hours ago        Up 10 hours         9092/tcp                       sentry_onpremise_kafka_1
661b74a9315e        confluentinc/cp-zookeeper:5.5.0        "/etc/confluent/dock…"   10 hours ago        Up 10 hours         2181/tcp, 2888/tcp, 3888/tcp   sentry_onpremise_zookeeper_1
1181eced5010        redis:5.0-alpine                       "docker-entrypoint.s…"   10 hours ago        Up 10 hours         6379/tcp                       sentry_onpremise_redis_1
163649432dee        yandex/clickhouse-server:20.3.9.70     "/entrypoint.sh"         10 hours ago        Up 2 hours          8123/tcp, 9000/tcp, 9009/tcp   sentry_onpremise_clickhouse_1

For which containers should I send logs?
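Before picking containers, it helps to check why each one stopped last time. A minimal sketch (the docker inspect format fields are standard; the exit-code mapping follows the usual 128+signal convention):

```shell
#!/bin/sh
# explain_exit: map common container exit codes to a likely cause
# (128 + signal number for signal-terminated processes).
explain_exit() {
  case "$1" in
    0)   echo "clean exit" ;;
    137) echo "SIGKILL (often the kernel OOM killer)" ;;
    139) echo "SIGSEGV (crash)" ;;
    143) echo "SIGTERM (stopped or restarted)" ;;
    *)   echo "exited with code $1" ;;
  esac
}

# Real usage on the Sentry host (requires docker):
#   docker inspect -f '{{.Name}} OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' \
#     $(docker ps -aq)
explain_exit 137   # prints: SIGKILL (often the kernel OOM killer)
```

If OOMKilled shows true for the consumers, that would explain restarts with an otherwise idle CPU.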

romankydybets commented 3 years ago

This is the log from ClickHouse:

2021-03-26T05:23:11.006547514Z Processing configuration file '/etc/clickhouse-server/config.xml'.
2021-03-26T05:23:11.013544541Z Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
2021-03-26T05:23:11.013622293Z Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
2021-03-26T05:23:11.013840697Z Include not found: clickhouse_remote_servers
2021-03-26T05:23:11.013966040Z Include not found: clickhouse_compression
2021-03-26T05:23:11.053238483Z Logging information to /var/log/clickhouse-server/clickhouse-server.log
2021-03-26T05:23:11.053316674Z Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2021-03-26T05:23:11.053322164Z Logging information to console
2021-03-26T05:23:11.075570151Z 2021.03.26 05:23:11.053610 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
2021-03-26T05:23:11.075641422Z 2021.03.26 05:23:11.075566 [ 1 ] {} <Information> Application: starting up
2021-03-26T05:23:11.233933211Z 2021.03.26 05:23:11.233782 [ 1 ] {} <Information> StatusFile: Status file /var/lib/clickhouse/status already exists - unclean restart. Contents:
2021-03-26T05:23:11.233966041Z PID: 1
2021-03-26T05:23:11.233969891Z Started at: 2021-03-25 21:21:31
2021-03-26T05:23:11.233972671Z Revision: 54433
2021-03-26T05:23:11.233975282Z
2021-03-26T05:23:11.238894274Z Include not found: networks
2021-03-26T05:23:11.242466890Z 2021.03.26 05:23:11.242341 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:23:11.245075784Z 2021.03.26 05:23:11.244967 [ 1 ] {} <Information> Application: Mark cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:23:11.245255208Z 2021.03.26 05:23:11.245154 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2021-03-26T05:23:11.257878882Z 2021.03.26 05:23:11.257718 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2021-03-26T05:23:11.270068748Z 2021.03.26 05:23:11.269899 [ 46 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2021-03-26T05:23:12.621717028Z 2021.03.26 05:23:12.621543 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2021-03-26T05:23:16.035634164Z 2021.03.26 05:23:16.016668 [ 46 ] {} <Warning> system.metric_log: Removing temporary directory /var/lib/clickhouse/data/system/metric_log/tmp_merge_202103_68905_79130_4054/
2021-03-26T05:23:16.196655149Z 2021.03.26 05:23:16.185576 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
2021-03-26T05:23:16.437823844Z 2021.03.26 05:23:16.437713 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2021-03-26T05:23:16.450230934Z 2021.03.26 05:23:16.450115 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2021-03-26T05:23:16.450653483Z 2021.03.26 05:23:16.450593 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2021-03-26T05:23:16.450699814Z 2021.03.26 05:23:16.450629 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021-03-26T05:23:16.491993430Z 2021.03.26 05:23:16.491857 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:23:16.492338036Z 2021.03.26 05:23:16.492278 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:23:16.492666964Z 2021.03.26 05:23:16.492589 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:23:16.492981150Z 2021.03.26 05:23:16.492899 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:23:16.493169194Z 2021.03.26 05:23:16.492999 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2021-03-26T05:23:16.493362058Z 2021.03.26 05:23:16.493218 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2021-03-26T05:23:16.493572023Z 2021.03.26 05:23:16.493483 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2021-03-26T05:23:17.542819165Z 2021.03.26 05:23:17.539855 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2021-03-26T05:23:17.548931943Z 2021.03.26 05:23:17.543445 [ 1 ] {} <Information> Application: Available RAM: 3.79 GiB; physical cores: 1; logical cores: 2.
2021-03-26T05:23:17.550582078Z 2021.03.26 05:23:17.550480 [ 1 ] {} <Information> Application: Ready for connections.
2021-03-26T05:23:19.554319136Z Include not found: clickhouse_remote_servers
2021-03-26T05:23:19.554528490Z Include not found: clickhouse_compression
2021-03-26T05:25:02.855281481Z 2021.03.26 05:25:02.855033 [ 88 ] {1e3fa251-19d1-4897-bd90-a99a92c45bc6} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.014 sec., 221 rows/sec., 105.52 KiB/sec.
2021-03-26T05:25:02.855561697Z 2021.03.26 05:25:02.855428 [ 88 ] {} <Information> TCPHandler: Processed in 0.015 sec.
2021-03-26T05:25:02.902087112Z 2021.03.26 05:25:02.901868 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T05:32:05.896398766Z Processing configuration file '/etc/clickhouse-server/config.xml'.
2021-03-26T05:32:05.900575535Z Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
2021-03-26T05:32:05.907892389Z Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
2021-03-26T05:32:05.911740540Z Include not found: clickhouse_remote_servers
2021-03-26T05:32:05.915289054Z Include not found: clickhouse_compression
2021-03-26T05:32:05.960326902Z Logging information to /var/log/clickhouse-server/clickhouse-server.log
2021-03-26T05:32:05.960576697Z Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2021-03-26T05:32:05.960715800Z Logging information to console
2021-03-26T05:32:05.974480859Z 2021.03.26 05:32:05.961466 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
2021-03-26T05:32:05.979528825Z 2021.03.26 05:32:05.974679 [ 1 ] {} <Information> Application: starting up
2021-03-26T05:32:06.115402334Z 2021.03.26 05:32:06.115270 [ 1 ] {} <Information> StatusFile: Status file /var/lib/clickhouse/status already exists - unclean restart. Contents:
2021-03-26T05:32:06.115450335Z PID: 1
2021-03-26T05:32:06.115458725Z Started at: 2021-03-26 05:23:11
2021-03-26T05:32:06.115464355Z Revision: 54433
2021-03-26T05:32:06.115535627Z
2021-03-26T05:32:06.120076263Z Include not found: networks
2021-03-26T05:32:06.123087775Z 2021.03.26 05:32:06.122966 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:32:06.124762341Z 2021.03.26 05:32:06.124647 [ 1 ] {} <Information> Application: Mark cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:32:06.124939595Z 2021.03.26 05:32:06.124836 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2021-03-26T05:32:06.134881194Z 2021.03.26 05:32:06.134700 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2021-03-26T05:32:06.155690791Z 2021.03.26 05:32:06.155482 [ 45 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2021-03-26T05:32:07.430289922Z 2021.03.26 05:32:07.430077 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2021-03-26T05:32:08.335240736Z 2021.03.26 05:32:08.328756 [ 45 ] {} <Warning> system.metric_log: Removing temporary directory /var/lib/clickhouse/data/system/metric_log/tmp_merge_202103_68905_79164_4080/
2021-03-26T05:32:08.535607960Z 2021.03.26 05:32:08.523500 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
2021-03-26T05:32:08.795838624Z 2021.03.26 05:32:08.795651 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2021-03-26T05:32:08.815612939Z 2021.03.26 05:32:08.800392 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2021-03-26T05:32:08.816317114Z 2021.03.26 05:32:08.816135 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2021-03-26T05:32:08.816535519Z 2021.03.26 05:32:08.816359 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021-03-26T05:32:08.851589966Z 2021.03.26 05:32:08.851375 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:32:08.852116227Z 2021.03.26 05:32:08.852006 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:32:08.852540527Z 2021.03.26 05:32:08.852445 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:32:08.852971275Z 2021.03.26 05:32:08.852872 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:32:08.853183789Z 2021.03.26 05:32:08.853090 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2021-03-26T05:32:08.853389695Z 2021.03.26 05:32:08.853285 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2021-03-26T05:32:08.853564448Z 2021.03.26 05:32:08.853476 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2021-03-26T05:32:10.972287321Z 2021.03.26 05:32:10.972040 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2021-03-26T05:32:10.975523470Z 2021.03.26 05:32:10.972951 [ 1 ] {} <Information> Application: Available RAM: 3.79 GiB; physical cores: 1; logical cores: 2.
2021-03-26T05:32:10.975706573Z 2021.03.26 05:32:10.975573 [ 1 ] {} <Information> Application: Ready for connections.
2021-03-26T05:32:12.979569271Z Include not found: clickhouse_remote_servers
2021-03-26T05:32:12.979876118Z Include not found: clickhouse_compression
2021-03-26T05:32:29.965107166Z 2021.03.26 05:32:29.964904 [ 87 ] {32e482fd-2e58-4032-97a8-98442bb1ea70} <Information> executeQuery: Read 1 rows, 8.20 KiB in 0.023 sec., 43 rows/sec., 355.64 KiB/sec.
2021-03-26T05:32:29.965396252Z 2021.03.26 05:32:29.965340 [ 87 ] {} <Information> HTTPHandler: Done processing query
2021-03-26T05:32:31.871426679Z 2021.03.26 05:32:31.871249 [ 88 ] {0b0b23594fbc97b1896058f4358ee124} <Information> executeQuery: Read 1 rows, 36.00 B in 0.025 sec., 40 rows/sec., 1.42 KiB/sec.
2021-03-26T05:32:31.871822677Z 2021.03.26 05:32:31.871727 [ 88 ] {} <Information> TCPHandler: Processed in 0.026 sec.
2021-03-26T05:32:31.894384323Z 2021.03.26 05:32:31.894250 [ 88 ] {56e5d5eb25c50f0e903cc0025078597f} <Information> executeQuery: Read 1 rows, 46.00 B in 0.006 sec., 172 rows/sec., 7.74 KiB/sec.
2021-03-26T05:32:31.894465924Z 2021.03.26 05:32:31.894417 [ 88 ] {} <Information> TCPHandler: Processed in 0.006 sec.
2021-03-26T05:35:03.253368780Z 2021.03.26 05:35:03.248408 [ 89 ] {b6040f54-f5a6-4ede-b411-1149b4cc08b7} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.044 sec., 91 rows/sec., 43.41 KiB/sec.
2021-03-26T05:35:03.263209938Z 2021.03.26 05:35:03.263015 [ 89 ] {} <Information> TCPHandler: Processed in 0.080 sec.
2021-03-26T05:35:03.287558450Z 2021.03.26 05:35:03.287363 [ 89 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T05:40:28.172309520Z Processing configuration file '/etc/clickhouse-server/config.xml'.
2021-03-26T05:40:28.173782711Z Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
2021-03-26T05:40:28.180234857Z Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
2021-03-26T05:40:28.182295830Z Include not found: clickhouse_remote_servers
2021-03-26T05:40:28.183894984Z Include not found: clickhouse_compression
2021-03-26T05:40:28.228084083Z Logging information to /var/log/clickhouse-server/clickhouse-server.log
2021-03-26T05:40:28.236819176Z Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2021-03-26T05:40:28.237012250Z Logging information to console
2021-03-26T05:40:28.278552204Z 2021.03.26 05:40:28.237684 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
2021-03-26T05:40:28.279315540Z 2021.03.26 05:40:28.279124 [ 1 ] {} <Information> Application: starting up
2021-03-26T05:40:28.435510564Z 2021.03.26 05:40:28.435351 [ 1 ] {} <Information> StatusFile: Status file /var/lib/clickhouse/status already exists - unclean restart. Contents:
2021-03-26T05:40:28.435540814Z PID: 1
2021-03-26T05:40:28.435544694Z Started at: 2021-03-26 05:32:06
2021-03-26T05:40:28.435547764Z Revision: 54433
2021-03-26T05:40:28.435623776Z
2021-03-26T05:40:28.440792255Z Include not found: networks
2021-03-26T05:40:28.444140235Z 2021.03.26 05:40:28.444017 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:40:28.446735390Z 2021.03.26 05:40:28.446612 [ 1 ] {} <Information> Application: Mark cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:40:28.446921814Z 2021.03.26 05:40:28.446828 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2021-03-26T05:40:28.458226331Z 2021.03.26 05:40:28.458077 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2021-03-26T05:40:28.469501208Z 2021.03.26 05:40:28.469385 [ 45 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2021-03-26T05:40:29.802784062Z 2021.03.26 05:40:29.802602 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2021-03-26T05:40:30.407621198Z 2021.03.26 05:40:30.405046 [ 45 ] {} <Warning> system.metric_log: Removing temporary directory /var/lib/clickhouse/data/system/metric_log/tmp_merge_202103_68905_79196_4106/
2021-03-26T05:40:30.584726972Z 2021.03.26 05:40:30.584429 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
2021-03-26T05:40:30.917175951Z 2021.03.26 05:40:30.916986 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2021-03-26T05:40:30.944620079Z 2021.03.26 05:40:30.944280 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2021-03-26T05:40:30.951600165Z 2021.03.26 05:40:30.948155 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2021-03-26T05:40:30.951769419Z 2021.03.26 05:40:30.951664 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021-03-26T05:40:31.067033472Z 2021.03.26 05:40:31.066723 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:40:31.067700456Z 2021.03.26 05:40:31.067481 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:40:31.068280008Z 2021.03.26 05:40:31.068075 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:40:31.068861751Z 2021.03.26 05:40:31.068653 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:40:31.069240558Z 2021.03.26 05:40:31.068990 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2021-03-26T05:40:31.069441942Z 2021.03.26 05:40:31.069317 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2021-03-26T05:40:31.069723539Z 2021.03.26 05:40:31.069632 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2021-03-26T05:40:32.652964533Z 2021.03.26 05:40:32.652647 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2021-03-26T05:40:32.659421008Z 2021.03.26 05:40:32.656535 [ 1 ] {} <Information> Application: Available RAM: 3.79 GiB; physical cores: 1; logical cores: 2.
2021-03-26T05:40:32.675832474Z 2021.03.26 05:40:32.675550 [ 1 ] {} <Information> Application: Ready for connections.
2021-03-26T05:40:34.662501203Z Include not found: clickhouse_remote_servers
2021-03-26T05:40:34.662771228Z Include not found: clickhouse_compression
2021-03-26T05:49:13.119059041Z 2021.03.26 05:49:13.118474 [ 87 ] {e5b10c39-f20d-4aaf-a232-e3ff9cf85bf6} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.058 sec., 68 rows/sec., 32.58 KiB/sec.
2021-03-26T05:49:13.119963410Z 2021.03.26 05:49:13.119777 [ 87 ] {} <Information> TCPHandler: Processed in 0.099 sec.
2021-03-26T05:49:13.383252631Z 2021.03.26 05:49:13.382485 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T05:52:20.015499586Z Processing configuration file '/etc/clickhouse-server/config.xml'.
2021-03-26T05:52:20.021290219Z Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
2021-03-26T05:52:20.021599945Z Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
2021-03-26T05:52:20.022732498Z Include not found: clickhouse_remote_servers
2021-03-26T05:52:20.022978003Z Include not found: clickhouse_compression
2021-03-26T05:52:20.064369206Z Logging information to /var/log/clickhouse-server/clickhouse-server.log
2021-03-26T05:52:20.064756203Z Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2021-03-26T05:52:20.065505469Z Logging information to console
2021-03-26T05:52:20.084642133Z 2021.03.26 05:52:20.065882 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
2021-03-26T05:52:20.084824586Z 2021.03.26 05:52:20.084715 [ 1 ] {} <Information> Application: starting up
2021-03-26T05:52:20.237475110Z 2021.03.26 05:52:20.237330 [ 1 ] {} <Information> StatusFile: Status file /var/lib/clickhouse/status already exists - unclean restart. Contents:
2021-03-26T05:52:20.237500111Z PID: 1
2021-03-26T05:52:20.237503981Z Started at: 2021-03-26 05:40:28
2021-03-26T05:52:20.237506621Z Revision: 54433
2021-03-26T05:52:20.237510191Z
2021-03-26T05:52:20.242518147Z Include not found: networks
2021-03-26T05:52:20.245989199Z 2021.03.26 05:52:20.245830 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:52:20.248619915Z 2021.03.26 05:52:20.248490 [ 1 ] {} <Information> Application: Mark cache size was lowered to 1.90 GiB because the system has low amount of memory
2021-03-26T05:52:20.248853630Z 2021.03.26 05:52:20.248699 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2021-03-26T05:52:20.260739510Z 2021.03.26 05:52:20.260539 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2021-03-26T05:52:20.273805445Z 2021.03.26 05:52:20.273604 [ 45 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2021-03-26T05:52:21.055805742Z 2021.03.26 05:52:21.055633 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2021-03-26T05:52:22.965002103Z 2021.03.26 05:52:22.964459 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
2021-03-26T05:52:23.207836546Z 2021.03.26 05:52:23.207620 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2021-03-26T05:52:23.223545547Z 2021.03.26 05:52:23.214103 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2021-03-26T05:52:23.224194900Z 2021.03.26 05:52:23.224073 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2021-03-26T05:52:23.224351044Z 2021.03.26 05:52:23.224263 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021-03-26T05:52:23.258175636Z 2021.03.26 05:52:23.257622 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:52:23.258691986Z 2021.03.26 05:52:23.258591 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:52:23.259105495Z 2021.03.26 05:52:23.259002 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:52:23.259530084Z 2021.03.26 05:52:23.259409 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021-03-26T05:52:23.259778840Z 2021.03.26 05:52:23.259669 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2021-03-26T05:52:23.260009615Z 2021.03.26 05:52:23.259869 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2021-03-26T05:52:23.260191028Z 2021.03.26 05:52:23.260101 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2021-03-26T05:52:24.086177691Z 2021.03.26 05:52:24.080590 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2021-03-26T05:52:24.086822084Z 2021.03.26 05:52:24.086718 [ 1 ] {} <Information> Application: Available RAM: 3.79 GiB; physical cores: 1; logical cores: 2.
2021-03-26T05:52:24.086987008Z 2021.03.26 05:52:24.086883 [ 1 ] {} <Information> Application: Ready for connections.
2021-03-26T05:52:26.090798401Z Include not found: clickhouse_remote_servers
2021-03-26T05:52:26.091065316Z Include not found: clickhouse_compression
2021-03-26T05:52:43.011780957Z 2021.03.26 05:52:43.011640 [ 86 ] {4736b0e0-8859-4f38-b7bd-84c61f26fb0e} <Information> executeQuery: Read 2 rows, 16.58 KiB in 0.026 sec., 77 rows/sec., 642.05 KiB/sec.
2021-03-26T05:52:43.011996901Z 2021.03.26 05:52:43.011919 [ 86 ] {} <Information> HTTPHandler: Done processing query
2021-03-26T05:52:44.937020975Z 2021.03.26 05:52:44.936895 [ 87 ] {d273400f18cde55e396552c57f1b6072} <Information> executeQuery: Read 2 rows, 72.00 B in 0.026 sec., 77 rows/sec., 2.73 KiB/sec.
2021-03-26T05:52:44.937148838Z 2021.03.26 05:52:44.937091 [ 87 ] {} <Information> TCPHandler: Processed in 0.027 sec.
2021-03-26T05:52:44.960307035Z 2021.03.26 05:52:44.960082 [ 87 ] {ecc78786ccdd73bce7bc0f44b929fe65} <Information> executeQuery: Read 2 rows, 92.00 B in 0.006 sec., 341 rows/sec., 15.35 KiB/sec.
2021-03-26T05:52:44.960635252Z 2021.03.26 05:52:44.960493 [ 87 ] {} <Information> TCPHandler: Processed in 0.007 sec.
2021-03-26T05:55:02.846192324Z 2021.03.26 05:55:02.844480 [ 88 ] {56d08b6d-eaba-4a3d-a954-5f70b88defe7} <Information> executeQuery: Read 5 rows, 2.38 KiB in 0.012 sec., 407 rows/sec., 193.87 KiB/sec.
2021-03-26T05:55:02.846660364Z 2021.03.26 05:55:02.846357 [ 88 ] {} <Information> TCPHandler: Processed in 0.021 sec.
2021-03-26T05:55:02.875996341Z 2021.03.26 05:55:02.875732 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:00:02.999987468Z 2021.03.26 06:00:02.999821 [ 88 ] {c8ffef28-7533-4fa3-8f61-8590248d227e} <Information> executeQuery: Read 5 rows, 2.38 KiB in 0.002 sec., 2271 rows/sec., 1.06 MiB/sec.
2021-03-26T06:00:03.000313845Z 2021.03.26 06:00:03.000198 [ 88 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T06:00:03.028813234Z 2021.03.26 06:00:03.028606 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:05:02.156562651Z 2021.03.26 06:05:02.156397 [ 88 ] {2eed3c37-a0e2-4edd-840f-1483d610b6dc} <Information> executeQuery: Read 5 rows, 2.38 KiB in 0.002 sec., 2313 rows/sec., 1.08 MiB/sec.
2021-03-26T06:05:02.156786955Z 2021.03.26 06:05:02.156702 [ 88 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T06:05:02.188708996Z 2021.03.26 06:05:02.188517 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:10:02.331101508Z 2021.03.26 06:10:02.330950 [ 88 ] {035b6d8e-bb6b-457a-80ce-d2603862ce60} <Information> executeQuery: Read 5 rows, 2.38 KiB in 0.002 sec., 2421 rows/sec., 1.13 MiB/sec.
2021-03-26T06:10:02.331189620Z 2021.03.26 06:10:02.331097 [ 88 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T06:10:02.360634500Z 2021.03.26 06:10:02.360453 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:15:02.469759644Z 2021.03.26 06:15:02.469529 [ 88 ] {854a4500-0eb7-47d5-9b74-7a333a61e8c0} <Information> executeQuery: Read 5 rows, 2.38 KiB in 0.002 sec., 2368 rows/sec., 1.10 MiB/sec.
2021-03-26T06:15:02.469782805Z 2021.03.26 06:15:02.469700 [ 88 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T06:15:02.498050599Z 2021.03.26 06:15:02.497880 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:20:02.667940711Z 2021.03.26 06:20:02.667777 [ 88 ] {6c61296a-f138-4274-9736-26a57fddb95c} <Information> executeQuery: Read 2 rows, 976.00 B in 0.002 sec., 825 rows/sec., 393.30 KiB/sec.
2021-03-26T06:20:02.668082514Z 2021.03.26 06:20:02.667995 [ 88 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T06:20:02.697387198Z 2021.03.26 06:20:02.697154 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:25:02.808433086Z 2021.03.26 06:25:02.808253 [ 88 ] {c0958736-5433-45a8-ab57-fed6712bd6e3} <Information> executeQuery: Read 2 rows, 976.00 B in 0.002 sec., 1250 rows/sec., 595.95 KiB/sec.
2021-03-26T06:25:02.808633460Z 2021.03.26 06:25:02.808525 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T06:25:02.836787512Z 2021.03.26 06:25:02.836615 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:30:02.962467513Z 2021.03.26 06:30:02.962234 [ 88 ] {81ec3f64-a60a-45c3-8ee5-a43acccf6fe3} <Information> executeQuery: Read 2 rows, 976.00 B in 0.020 sec., 101 rows/sec., 48.27 KiB/sec.
2021-03-26T06:30:02.962770080Z 2021.03.26 06:30:02.962669 [ 88 ] {} <Information> TCPHandler: Processed in 0.023 sec.
2021-03-26T06:30:02.998919019Z 2021.03.26 06:30:02.998652 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:35:03.020151789Z 2021.03.26 06:35:03.019989 [ 88 ] {7b0b893e-0d9f-4f65-88b5-fe53e4cf5f93} <Information> executeQuery: Read 2 rows, 976.00 B in 0.002 sec., 1132 rows/sec., 539.69 KiB/sec.
2021-03-26T06:35:03.020289811Z 2021.03.26 06:35:03.020204 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T06:35:03.048561436Z 2021.03.26 06:35:03.048327 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:40:02.159794389Z 2021.03.26 06:40:02.159606 [ 88 ] {d59cf73a-b491-45dc-8942-2af3779d2b4f} <Information> executeQuery: Read 2 rows, 976.00 B in 0.002 sec., 1331 rows/sec., 634.48 KiB/sec.
2021-03-26T06:40:02.160071735Z 2021.03.26 06:40:02.159988 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T06:40:02.189155496Z 2021.03.26 06:40:02.188900 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:44:41.630486455Z 2021.03.26 06:44:41.630349 [ 86 ] {699edc39-de19-441f-a521-a72165cdb3b2} <Information> executeQuery: Read 1 rows, 3.60 KiB in 0.013 sec., 77 rows/sec., 279.94 KiB/sec.
2021-03-26T06:44:41.631158519Z 2021.03.26 06:44:41.631068 [ 86 ] {} <Information> HTTPHandler: Done processing query
2021-03-26T06:45:02.320008412Z 2021.03.26 06:45:02.319867 [ 88 ] {020b3dfd-ac16-4b19-a1fe-61b608cf87ad} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1661 rows/sec., 791.36 KiB/sec.
2021-03-26T06:45:02.320274389Z 2021.03.26 06:45:02.320180 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T06:45:02.349013773Z 2021.03.26 06:45:02.348825 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:50:02.470110182Z 2021.03.26 06:50:02.469943 [ 88 ] {d75cdf4c-62c8-47c5-8ecc-e73c23a4da8f} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1639 rows/sec., 780.76 KiB/sec.
2021-03-26T06:50:02.470256775Z 2021.03.26 06:50:02.470170 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T06:50:02.501395917Z 2021.03.26 06:50:02.500607 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:52:47.652773539Z 2021.03.26 06:52:47.652624 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T06:55:02.634102286Z 2021.03.26 06:55:02.633951 [ 87 ] {da8aa691-c47a-451a-9cdf-c3074e7d7206} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1707 rows/sec., 813.14 KiB/sec.
2021-03-26T06:55:02.634391962Z 2021.03.26 06:55:02.634310 [ 87 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T06:55:02.663059505Z 2021.03.26 06:55:02.662852 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:00:02.775382071Z 2021.03.26 07:00:02.775235 [ 88 ] {58694124-23c3-4d4e-938b-85e041967bb1} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1629 rows/sec., 776.05 KiB/sec.
2021-03-26T07:00:02.775498123Z 2021.03.26 07:00:02.775418 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T07:00:02.810854506Z 2021.03.26 07:00:02.810697 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:05:02.915860221Z 2021.03.26 07:05:02.915670 [ 87 ] {c1e04b1e-c58a-4552-9b20-d15688cebfa1} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1705 rows/sec., 812.08 KiB/sec.
2021-03-26T07:05:02.916107117Z 2021.03.26 07:05:02.916028 [ 87 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T07:05:02.944965894Z 2021.03.26 07:05:02.944796 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:10:03.046463985Z 2021.03.26 07:10:03.046262 [ 88 ] {3b727309-8dd1-4391-b05f-f5c44e85c636} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1712 rows/sec., 815.59 KiB/sec.
2021-03-26T07:10:03.046714130Z 2021.03.26 07:10:03.046629 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T07:10:03.074990464Z 2021.03.26 07:10:03.074794 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:15:02.195566095Z 2021.03.26 07:15:02.195391 [ 87 ] {93ea85f3-d678-483e-afb5-8bb8e51330b2} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1657 rows/sec., 789.19 KiB/sec.
2021-03-26T07:15:02.195657606Z 2021.03.26 07:15:02.195567 [ 87 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T07:15:02.225778079Z 2021.03.26 07:15:02.225607 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:20:02.354232114Z 2021.03.26 07:20:02.353862 [ 88 ] {e3e15841-d085-4bbf-bd65-99ff53b24e40} <Information> executeQuery: Read 3 rows, 1.43 KiB in 0.002 sec., 1643 rows/sec., 782.70 KiB/sec.
2021-03-26T07:20:02.354256895Z 2021.03.26 07:20:02.354015 [ 88 ] {} <Information> TCPHandler: Processed in 0.002 sec.
2021-03-26T07:20:02.382675052Z 2021.03.26 07:20:02.382480 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:23:32.474551096Z 2021.03.26 07:23:32.474441 [ 86 ] {3e4031bd-d8bf-49e9-860c-7b590b13e0b9} <Information> executeQuery: Read 1 rows, 1.46 KiB in 0.010 sec., 100 rows/sec., 146.20 KiB/sec.
2021-03-26T07:23:32.474760630Z 2021.03.26 07:23:32.474683 [ 86 ] {} <Information> HTTPHandler: Done processing query
2021-03-26T07:25:02.494962301Z 2021.03.26 07:25:02.494795 [ 87 ] {5090d3a7-bd6c-430c-afa6-0cd685865a65} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.002 sec., 2038 rows/sec., 970.60 KiB/sec.
2021-03-26T07:25:02.495301619Z 2021.03.26 07:25:02.495159 [ 87 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T07:25:02.524692746Z 2021.03.26 07:25:02.524423 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:30:02.658267864Z 2021.03.26 07:30:02.658069 [ 88 ] {522ee133-8f54-4aa6-b982-0186025dbd8d} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.002 sec., 1634 rows/sec., 778.31 KiB/sec.
2021-03-26T07:30:02.659305076Z 2021.03.26 07:30:02.659230 [ 88 ] {} <Information> TCPHandler: Processed in 0.004 sec.
2021-03-26T07:30:02.688343316Z 2021.03.26 07:30:02.688132 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:35:02.798788347Z 2021.03.26 07:35:02.798580 [ 87 ] {fe291c81-4987-4a8c-844a-faa362c19ff1} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.002 sec., 2039 rows/sec., 971.15 KiB/sec.
2021-03-26T07:35:02.799060152Z 2021.03.26 07:35:02.798934 [ 87 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T07:35:02.827713735Z 2021.03.26 07:35:02.827427 [ 87 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:40:02.942787786Z 2021.03.26 07:40:02.942599 [ 88 ] {39fbbba2-b33b-4ca5-a58e-48184c09c064} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.003 sec., 1143 rows/sec., 544.49 KiB/sec.
2021-03-26T07:40:02.942992821Z 2021.03.26 07:40:02.942859 [ 88 ] {} <Information> TCPHandler: Processed in 0.004 sec.
2021-03-26T07:40:02.972828178Z 2021.03.26 07:40:02.972628 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:42:05.331801359Z 2021.03.26 07:42:05.331653 [ 87 ] {a40a672ccc7cc72790b2d0b71fc37a0e} <Information> executeQuery: Read 3950 rows, 887.10 KiB in 0.020 sec., 193780 rows/sec., 42.50 MiB/sec.
2021-03-26T07:42:05.332028723Z 2021.03.26 07:42:05.331958 [ 87 ] {} <Information> TCPHandler: Processed in 0.021 sec.
2021-03-26T07:42:05.532776442Z 2021.03.26 07:42:05.532581 [ 87 ] {73bbc56dbb56738237e924ee8d774a25} <Information> executeQuery: Read 3950 rows, 81.01 KiB in 0.018 sec., 216207 rows/sec., 4.33 MiB/sec.
2021-03-26T07:42:05.533108060Z 2021.03.26 07:42:05.533011 [ 87 ] {} <Information> TCPHandler: Processed in 0.020 sec.
2021-03-26T07:42:05.583889376Z 2021.03.26 07:42:05.583745 [ 87 ] {dfcebbce41f9fba8526d140de9baf1fb} <Information> executeQuery: Read 3950 rows, 112.21 KiB in 0.014 sec., 291524 rows/sec., 8.09 MiB/sec.
2021-03-26T07:42:05.584203702Z 2021.03.26 07:42:05.584116 [ 87 ] {} <Information> TCPHandler: Processed in 0.015 sec.
2021-03-26T07:42:05.842300947Z 2021.03.26 07:42:05.842138 [ 87 ] {86e15571b97d87e5fcd04996f32ddb2a} <Information> executeQuery: Read 3950 rows, 120.30 KiB in 0.008 sec., 525254 rows/sec., 15.62 MiB/sec.
2021-03-26T07:42:05.842602763Z 2021.03.26 07:42:05.842515 [ 87 ] {} <Information> TCPHandler: Processed in 0.009 sec.
2021-03-26T07:42:05.874958403Z 2021.03.26 07:42:05.874789 [ 87 ] {5084b1d269e58ec056f5d9063b6ea036} <Information> executeQuery: Read 3950 rows, 120.30 KiB in 0.006 sec., 679505 rows/sec., 20.21 MiB/sec.
2021-03-26T07:42:05.875114166Z 2021.03.26 07:42:05.875046 [ 87 ] {} <Information> TCPHandler: Processed in 0.007 sec.
2021-03-26T07:42:05.903882471Z 2021.03.26 07:42:05.903725 [ 87 ] {c1b4a8aa9b78244ee437160e7a6ed7b3} <Information> executeQuery: Read 187 rows, 6.69 KiB in 0.005 sec., 39436 rows/sec., 1.38 MiB/sec.
2021-03-26T07:42:05.904107325Z 2021.03.26 07:42:05.904021 [ 87 ] {} <Information> TCPHandler: Processed in 0.006 sec.
2021-03-26T07:42:06.051339839Z 2021.03.26 07:42:06.051216 [ 86 ] {9e0765b8-db47-41db-9c9f-dc2de5abfbea} <Information> executeQuery: Read 1 rows, 185.00 B in 0.022 sec., 45 rows/sec., 8.27 KiB/sec.
2021-03-26T07:42:06.051676166Z 2021.03.26 07:42:06.051514 [ 86 ] {} <Information> HTTPHandler: Done processing query
2021-03-26T07:45:03.076309651Z 2021.03.26 07:45:03.076100 [ 88 ] {d056b35f-1d75-42f2-bdb7-30d8c2915c2b} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.002 sec., 2004 rows/sec., 954.11 KiB/sec.
2021-03-26T07:45:03.076597857Z 2021.03.26 07:45:03.076482 [ 88 ] {} <Information> TCPHandler: Processed in 0.003 sec.
2021-03-26T07:45:03.108145450Z 2021.03.26 07:45:03.107893 [ 88 ] {} <Information> TCPHandler: Done processing connection.
2021-03-26T07:50:02.248838418Z 2021.03.26 07:50:02.248605 [ 88 ] {17a8098d-d056-4510-aa27-018209bd1ce8} <Information> executeQuery: Read 4 rows, 1.90 KiB in 0.005 sec., 859 rows/sec., 409.11 KiB/sec.
2021-03-26T07:50:02.249345500Z 2021.03.26 07:50:02.249262 [ 88 ] {} <Information> TCPHandler: Processed in 0.006 sec.
2021-03-26T07:50:02.281935563Z 2021.03.26 07:50:02.281659 [ 88 ] {} <Information> TCPHandler: Done processing connection.

ZooKeeper logs:

2021-03-25T21:21:50.070577727Z ===> Launching zookeeper ...
2021-03-25T21:21:57.158843212Z [2021-03-25 21:21:57,158] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
2021-03-25T21:22:00.145853598Z [2021-03-25 21:22:00,145] WARN o.e.j.s.ServletContextHandler@4d95d2a2{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
2021-03-25T21:22:00.172044943Z [2021-03-25 21:22:00,171] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
2021-03-26T05:12:34.319472117Z [2021-03-26 05:12:34,305] WARN CancelledKeyException causing close of session 0x1000c58760a0003 (org.apache.zookeeper.server.NIOServerCnxn)
2021-03-26T05:27:19.877884315Z [2021-03-26 05:27:19,782] WARN Unable to read additional data from client sessionid 0x1000c58760a0005, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
2021-03-26T05:40:20.426899732Z [2021-03-26 05:40:20,425] WARN fsync-ing the write ahead log in SyncThread:0 took 16344ms which will adversely effect operation latency. File size is 67108880 bytes. See the ZooKeeper troubleshooting guide (org.apache.zookeeper.server.persistence.FileTxnLog)
2021-03-26T05:49:12.359372637Z [2021-03-26 05:49:12,350] WARN Unable to read additional data from client sessionid 0x1000c58760a0009, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
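The fsync warning above (16344 ms to sync a ~64 MB write-ahead log segment) usually points at disk I/O contention on the Docker host rather than at ZooKeeper itself. A quick way to check, assuming the sysstat package is installed on the host:

```shell
# Per-device I/O latency on the Docker host; sustained spikes in await/%util
# while ZooKeeper logs fsync warnings suggest the disk is the bottleneck.
iostat -x 1 5

# One-shot snapshot of per-container CPU/memory/block-I/O at the same moment.
docker stats --no-stream
```

If the disk is saturated, the ZooKeeper troubleshooting guide referenced in the warning recommends putting the transaction log on a dedicated, faster device.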

Kafka logs:

2021-03-25T21:22:00.777075064Z ===> Check if Zookeeper is healthy ...
2021-03-25T21:22:06.202068708Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
2021-03-25T21:22:06.202098398Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=78c5a6d4a088
2021-03-25T21:22:06.202102628Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
2021-03-25T21:22:06.202152019Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
2021-03-25T21:22:06.202158069Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
2021-03-25T21:22:06.202161550Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
2021-03-25T21:22:06.202185650Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2021-03-25T21:22:06.202248711Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
2021-03-25T21:22:06.207801249Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
2021-03-25T21:22:06.207877311Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
2021-03-25T21:22:06.207883291Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
2021-03-25T21:22:06.207904711Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.14.219-164.354.amzn2.x86_64
2021-03-25T21:22:06.207937872Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
2021-03-25T21:22:06.207973633Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
2021-03-25T21:22:06.207998283Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
2021-03-25T21:22:06.208046414Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=56MB
2021-03-25T21:22:06.208197047Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=864MB
2021-03-25T21:22:06.208235238Z [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=59MB
2021-03-25T21:22:06.238414570Z [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
2021-03-25T21:22:06.292490810Z [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2021-03-25T21:22:06.352095217Z [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
2021-03-25T21:22:06.438350129Z [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
2021-03-25T21:22:06.546452158Z [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.21.0.8:2181. Will not attempt to authenticate using SASL (unknown error)
2021-03-25T21:22:07.328436668Z [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.21.0.11:46234, server: zookeeper/172.21.0.8:2181
2021-03-25T21:22:07.402409841Z [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.21.0.8:2181, sessionid = 0x1000c58760a0000, negotiated timeout = 40000
2021-03-25T21:22:07.537209976Z [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000c58760a0000 closed
2021-03-25T21:22:07.537469171Z [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000c58760a0000
2021-03-25T21:22:07.615653543Z ===> Launching ...
2021-03-25T21:22:07.639689824Z ===> Launching kafka ...
2021-03-25T21:22:12.273286941Z [2021-03-25 21:22:12,272] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
2021-03-25T21:22:16.066237921Z [2021-03-25 21:22:16,066] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
2021-03-25T21:22:16.066403094Z [2021-03-25 21:22:16,066] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
2021-03-25T21:22:20.524611643Z [2021-03-25 21:22:20,524] INFO Starting the log cleaner (kafka.log.LogCleaner)
2021-03-25T21:22:20.732986652Z [2021-03-25 21:22:20,732] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
2021-03-25T21:22:22.074979157Z [2021-03-25 21:22:22,074] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
2021-03-25T21:22:22.173617394Z [2021-03-25 21:22:22,173] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
2021-03-25T21:22:22.175234618Z [2021-03-25 21:22:22,175] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
2021-03-25T21:22:22.474637861Z [2021-03-25 21:22:22,474] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-25T21:22:22.507099412Z [2021-03-25 21:22:22,506] INFO Stat of the created znode at /brokers/ids/1001 is: 109,109,1616707342500,1616707342500,1,0,0,72071168114950145,180,0,109
2021-03-25T21:22:22.507157743Z  (kafka.zk.KafkaZkClient)
2021-03-25T21:22:22.513050798Z [2021-03-25 21:22:22,512] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 109 (kafka.zk.KafkaZkClient)
2021-03-25T21:22:23.394910762Z [2021-03-25 21:22:23,394] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
2021-03-25T21:22:23.684783023Z [2021-03-25 21:22:23,684] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
2021-03-25T21:22:24.016997484Z [2021-03-25 21:22:24,016] INFO Creating topic events-subscription-results with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
2021-03-25T21:22:24.027785904Z [2021-03-25 21:22:24,017] INFO Creating topic transactions-subscription-results with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
2021-03-25T23:43:16.791553724Z [2021-03-25 23:43:16,600] WARN Client session timed out, have not heard from server in 15241ms for sessionid 0x1000c58760a0001 (org.apache.zookeeper.ClientCnxn)
2021-03-25T23:43:18.501901996Z [2021-03-25 23:43:18,500] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0001 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-25T23:43:18.521779444Z [2021-03-25 23:43:18,521] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-25T23:43:18.530255422Z [2021-03-25 23:43:18,530] INFO Stat of the created znode at /brokers/ids/1001 is: 127,127,1616715798527,1616715798527,1,0,0,72071168114950146,180,0,127
2021-03-25T23:43:18.530276403Z  (kafka.zk.KafkaZkClient)
2021-03-25T23:43:18.530409735Z [2021-03-25 23:43:18,530] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 127 (kafka.zk.KafkaZkClient)
2021-03-26T05:08:25.243590093Z [2021-03-26 05:08:24,290] WARN Client session timed out, have not heard from server in 458640ms for sessionid 0x1000c58760a0002 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:08:26.398932980Z [2021-03-26 05:08:26,398] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0002 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:08:26.412385192Z [2021-03-26 05:08:26,412] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:08:26.440533970Z [2021-03-26 05:08:26,440] INFO Stat of the created znode at /brokers/ids/1001 is: 132,132,1616735306425,1616735306425,1,0,0,72071168114950147,180,0,132
2021-03-26T05:08:26.440566940Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:08:26.440823256Z [2021-03-26 05:08:26,440] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 132 (kafka.zk.KafkaZkClient)
2021-03-26T05:12:36.191148122Z [2021-03-26 05:12:36,190] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0003 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:12:36.198588808Z [2021-03-26 05:12:36,198] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:12:36.205986413Z [2021-03-26 05:12:36,205] INFO Stat of the created znode at /brokers/ids/1001 is: 137,137,1616735556202,1616735556202,1,0,0,72071168114950148,180,0,137
2021-03-26T05:12:36.206022503Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:12:36.206139656Z [2021-03-26 05:12:36,206] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 137 (kafka.zk.KafkaZkClient)
2021-03-26T05:23:03.947255022Z [2021-03-26 05:15:53,179] WARN Client session timed out, have not heard from server in 12144ms for sessionid 0x1000c58760a0004 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:23:05.833949957Z [2021-03-26 05:23:05,832] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0004 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:23:05.858546472Z [2021-03-26 05:23:05,856] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:23:05.865193342Z [2021-03-26 05:23:05,864] INFO Stat of the created znode at /brokers/ids/1001 is: 142,142,1616736185860,1616736185860,1,0,0,72071168114950149,180,0,142
2021-03-26T05:23:05.865218542Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:23:05.865221922Z [2021-03-26 05:23:05,864] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 142 (kafka.zk.KafkaZkClient)
2021-03-26T05:27:19.479142071Z [2021-03-26 05:27:18,767] WARN Client session timed out, have not heard from server in 12004ms for sessionid 0x1000c58760a0005 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:27:24.151725827Z [2021-03-26 05:27:24,147] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0005 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:27:24.159633253Z [2021-03-26 05:27:24,154] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:27:24.163852522Z [2021-03-26 05:27:24,163] INFO Stat of the created znode at /brokers/ids/1001 is: 147,147,1616736444162,1616736444162,1,0,0,72071168114950150,180,0,147
2021-03-26T05:27:24.163869943Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:27:24.164070977Z [2021-03-26 05:27:24,163] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 147 (kafka.zk.KafkaZkClient)
2021-03-26T05:29:22.426303127Z [2021-03-26 05:28:42,364] WARN Client session timed out, have not heard from server in 12048ms for sessionid 0x1000c58760a0006 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:31:48.445179906Z [2021-03-26 05:31:48,444] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0006 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:31:59.407559989Z [2021-03-26 05:31:59,406] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:31:59.464566989Z [2021-03-26 05:31:59,464] INFO Stat of the created znode at /brokers/ids/1001 is: 152,152,1616736719454,1616736719454,1,0,0,72071168114950151,180,0,152
2021-03-26T05:31:59.464606309Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:31:59.467237155Z [2021-03-26 05:31:59,467] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 152 (kafka.zk.KafkaZkClient)
2021-03-26T05:40:21.253721585Z [2021-03-26 05:36:14,602] WARN Client session timed out, have not heard from server in 14364ms for sessionid 0x1000c58760a0007 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:40:22.687578022Z [2021-03-26 05:40:22,685] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0007 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:40:22.695902037Z [2021-03-26 05:40:22,695] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:40:22.715530150Z [2021-03-26 05:40:22,712] INFO Stat of the created znode at /brokers/ids/1001 is: 157,157,1616737222709,1616737222709,1,0,0,72071168114950152,180,0,157
2021-03-26T05:40:22.715555151Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:40:22.715558301Z [2021-03-26 05:40:22,712] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 157 (kafka.zk.KafkaZkClient)
2021-03-26T05:44:28.259279297Z [2021-03-26 05:44:00,631] WARN Client session timed out, have not heard from server in 21091ms for sessionid 0x1000c58760a0008 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:44:29.937061039Z [2021-03-26 05:44:29,936] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0008 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:44:29.964801881Z [2021-03-26 05:44:29,962] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:44:29.979483230Z [2021-03-26 05:44:29,977] INFO Stat of the created znode at /brokers/ids/1001 is: 162,162,1616737469975,1616737469975,1,0,0,72071168114950153,180,0,162
2021-03-26T05:44:29.979508660Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:44:29.979513090Z [2021-03-26 05:44:29,977] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 162 (kafka.zk.KafkaZkClient)
2021-03-26T05:45:54.025399007Z [2021-03-26 05:45:52,639] WARN Client session timed out, have not heard from server in 12047ms for sessionid 0x1000c58760a0009 (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:49:14.943802112Z [2021-03-26 05:49:14,936] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a0009 has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:49:15.720506549Z [2021-03-26 05:49:15,687] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:49:15.721948959Z [2021-03-26 05:49:15,719] INFO Stat of the created znode at /brokers/ids/1001 is: 168,168,1616737755714,1616737755714,1,0,0,72071168114950154,180,0,168
2021-03-26T05:49:15.721956400Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:49:15.721959380Z [2021-03-26 05:49:15,721] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 168 (kafka.zk.KafkaZkClient)
2021-03-26T05:52:12.067008503Z [2021-03-26 05:52:12,066] WARN Client session timed out, have not heard from server in 176114ms for sessionid 0x1000c58760a000a (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:52:13.972235617Z [2021-03-26 05:52:13,972] WARN Unable to reconnect to ZooKeeper service, session 0x1000c58760a000a has expired (org.apache.zookeeper.ClientCnxn)
2021-03-26T05:52:14.095192296Z [2021-03-26 05:52:14,080] ERROR [Controller id=1001] Error while electing or becoming controller on broker 1001. Trigger controller movement immediately (kafka.controller.KafkaController)
2021-03-26T05:52:14.095218766Z kafka.zookeeper.ZooKeeperClientExpiredException: Session expired either before or while waiting for connection
2021-03-26T05:52:14.095222516Z  at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:275)
2021-03-26T05:52:14.095225587Z  at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:263)
2021-03-26T05:52:14.095228317Z  at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$1(ZooKeeperClient.scala:257)
2021-03-26T05:52:14.095230997Z  at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:257)
2021-03-26T05:52:14.095233467Z  at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1712)
2021-03-26T05:52:14.095235997Z  at kafka.zk.KafkaZkClient.getFullReplicaAssignmentForTopics(KafkaZkClient.scala:595)
2021-03-26T05:52:14.095238727Z  at kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:792)
2021-03-26T05:52:14.095241347Z  at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:292)
2021-03-26T05:52:14.095243907Z  at kafka.controller.KafkaController.elect(KafkaController.scala:1404)
2021-03-26T05:52:14.095246497Z  at kafka.controller.KafkaController.processReelect(KafkaController.scala:1855)
2021-03-26T05:52:14.095249067Z  at kafka.controller.KafkaController.processRegisterBrokerAndReelect(KafkaController.scala:1860)
2021-03-26T05:52:14.095251767Z  at kafka.controller.KafkaController.process(KafkaController.scala:1902)
2021-03-26T05:52:14.095254397Z  at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:53)
2021-03-26T05:52:14.095257127Z  at kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:136)
2021-03-26T05:52:14.095260087Z  at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:139)
2021-03-26T05:52:14.095262847Z  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
2021-03-26T05:52:14.095265457Z  at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
2021-03-26T05:52:14.095267987Z  at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:139)
2021-03-26T05:52:14.095270507Z  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
2021-03-26T05:52:14.192065596Z [2021-03-26 05:52:14,189] ERROR [Controller id=1001] Error processing event RegisterBrokerAndReelect (kafka.controller.KafkaController)
2021-03-26T05:52:14.192085276Z org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /controller
2021-03-26T05:52:14.192089046Z  at org.apache.zookeeper.KeeperException.create(KeeperException.java:134)
2021-03-26T05:52:14.192100417Z  at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
2021-03-26T05:52:14.192103317Z  at kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:562)
2021-03-26T05:52:14.192106107Z  at kafka.zk.KafkaZkClient.getControllerId(KafkaZkClient.scala:1052)
2021-03-26T05:52:14.192108637Z  at kafka.controller.KafkaController.triggerControllerMove(KafkaController.scala:1358)
2021-03-26T05:52:14.192111307Z  at kafka.controller.KafkaController.elect(KafkaController.scala:1417)
2021-03-26T05:52:14.192113997Z  at kafka.controller.KafkaController.processReelect(KafkaController.scala:1855)
2021-03-26T05:52:14.192116787Z  at kafka.controller.KafkaController.processRegisterBrokerAndReelect(KafkaController.scala:1860)
2021-03-26T05:52:14.192119437Z  at kafka.controller.KafkaController.process(KafkaController.scala:1902)
2021-03-26T05:52:14.192121957Z  at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:53)
2021-03-26T05:52:14.192124447Z  at kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:136)
2021-03-26T05:52:14.192127277Z  at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:139)
2021-03-26T05:52:14.192129997Z  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
2021-03-26T05:52:14.192132617Z  at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
2021-03-26T05:52:14.192135567Z  at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:139)
2021-03-26T05:52:14.192138157Z  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
2021-03-26T05:52:14.241639660Z [2021-03-26 05:52:14,206] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
2021-03-26T05:52:14.253669443Z [2021-03-26 05:52:14,250] INFO Stat of the created znode at /brokers/ids/1001 is: 172,172,1616737934244,1616737934244,1,0,0,72071168114950155,180,0,172
2021-03-26T05:52:14.253686773Z  (kafka.zk.KafkaZkClient)
2021-03-26T05:52:14.253714534Z [2021-03-26 05:52:14,250] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 172 (kafka.zk.KafkaZkClient)
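The repeated session expiries above mean the broker keeps dropping off ZooKeeper and re-registering, which forces every consumer group to rebalance and lets lag build up. The LAG mentioned in the report can be inspected from inside the Kafka container; a sketch, assuming the default service name (kafka) and the default snuba group name (snuba-consumers) from the self-hosted docker-compose setup:

```shell
# List all consumer groups known to the broker.
docker-compose exec kafka kafka-consumer-groups \
  --bootstrap-server kafka:9092 --list

# Show per-partition current offset, log-end offset, and LAG for one group.
docker-compose exec kafka kafka-consumer-groups \
  --bootstrap-server kafka:9092 --describe --group snuba-consumers
```

A growing LAG column combined with the session-expiry warnings is consistent with the host being too slow (I/O or CPU starved) to keep the ZooKeeper heartbeats alive, not with a bug in the consumers themselves.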

snuba-consumer logs (docker logs -ft sentry_onpremise_snuba-consumer_1):

2021-03-25T21:20:32.985647286Z 2021-03-25 21:20:32,985 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
2021-03-25T21:20:35.924537193Z 2021-03-25 21:20:35,924 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-25T21:20:36.250109391Z 2021-03-25 21:20:36,249 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
2021-03-25T21:20:41.252152476Z 2021-03-25 21:20:41,251 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-25T21:22:07.523821592Z %3|1616707327.523|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.21.0.11:9092 failed: Connection refused (after 7ms in state CONNECT)
2021-03-25T21:22:07.527591572Z %3|1616707327.527|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.21.0.11:9092 failed: Connection refused (after 2ms in state CONNECT)
2021-03-25T21:22:08.524512822Z %3|1616707328.524|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.21.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-03-25T21:22:08.524698995Z %3|1616707328.524|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.21.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-03-25T21:22:27.965511201Z 2021-03-25 21:22:27,963 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
2021-03-25T21:33:45.460634551Z 2021-03-25 21:33:45,460 Completed processing <Batch: 1 message, open for 1.06 seconds>.
2021-03-25T21:34:11.504775873Z 2021-03-25 21:34:11,504 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T21:34:16.970138461Z 2021-03-25 21:34:16,969 Completed processing <Batch: 1 message, open for 1.03 seconds>.
2021-03-25T21:37:10.469698919Z 2021-03-25 21:37:10,469 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T21:38:18.877704314Z 2021-03-25 21:38:18,877 Completed processing <Batch: 1 message, open for 1.04 seconds>.
2021-03-25T21:41:40.494202472Z 2021-03-25 21:41:40,494 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T21:41:50.534175104Z 2021-03-25 21:41:50,533 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T21:59:51.858921046Z 2021-03-25 21:59:51,858 Completed processing <Batch: 1 message, open for 1.04 seconds>.
2021-03-25T22:01:27.429680001Z 2021-03-25 22:01:27,429 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:01:28.911736493Z 2021-03-25 22:01:28,911 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:02:06.971820906Z 2021-03-25 22:02:06,971 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:02:23.723575807Z 2021-03-25 22:02:23,723 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:02:38.356088697Z 2021-03-25 22:02:38,355 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:03:48.827429403Z 2021-03-25 22:03:48,827 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:04:00.555600220Z 2021-03-25 22:04:00,555 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:04:43.277847387Z 2021-03-25 22:04:43,277 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:05:01.423201178Z 2021-03-25 22:05:01,423 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:05:14.757125921Z 2021-03-25 22:05:14,756 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:05:34.740953054Z 2021-03-25 22:05:34,740 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:05:43.079742366Z 2021-03-25 22:05:43,079 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:07:14.824407762Z 2021-03-25 22:07:14,824 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:07:38.284871138Z 2021-03-25 22:07:38,284 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:07:51.524700496Z 2021-03-25 22:07:51,524 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:08:01.655262374Z 2021-03-25 22:08:01,655 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:10:23.073317556Z 2021-03-25 22:10:23,073 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:10:33.532220236Z 2021-03-25 22:10:33,532 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T22:18:05.466306966Z 2021-03-25 22:18:05,466 Completed processing <Batch: 1 message, open for 1.03 seconds>.
2021-03-25T23:35:37.430928310Z 2021-03-25 23:35:37,430 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-25T23:42:53.544863379Z %4|1616715773.533|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 10802 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-25T23:42:54.513111231Z %5|1616715774.499|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10802ms, timeout #0)
2021-03-25T23:42:54.668851462Z %4|1616715774.500|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-25T23:42:54.668855792Z %3|1616715774.500|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 8429969ms in state UP)
2021-03-25T23:43:00.858684128Z 2021-03-25 23:42:53,937 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-25T23:43:07.834145616Z %5|1616715787.691|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10375ms, timeout #0)
2021-03-25T23:43:08.075477293Z %4|1616715787.711|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 10415ms in state APIVERSION_QUERY)
2021-03-25T23:43:08.371470562Z %4|1616715788.231|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-25T23:43:26.860717797Z 2021-03-25 23:43:26,856 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 28}
2021-03-26T04:59:11.803476720Z %4|1616734751.424|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 10766 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T04:59:14.541540975Z 2021-03-26 04:59:14,109 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T04:59:15.880599556Z %5|1616734755.671|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10071ms, timeout #0)
2021-03-26T04:59:15.889196816Z %4|1616734755.687|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T04:59:17.215796427Z %3|1616734757.211|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 2 request(s) timed out: disconnect (after 18958864ms in state UP)
2021-03-26T04:59:44.908145222Z %5|1616734784.615|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10711ms, timeout #0)
2021-03-26T05:00:38.477708450Z %4|1616734823.553|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 48375ms in state APIVERSION_QUERY)
2021-03-26T05:01:42.404531544Z %4|1616734849.636|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:02:02.772528120Z %5|1616734863.299|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FetchRequest in flight (after 98913ms, timeout #0)
2021-03-26T05:02:02.773110542Z %5|1616734889.018|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out MetadataRequest in flight (after 79075ms, timeout #1)
2021-03-26T05:02:03.071960848Z %5|1616734915.220|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FindCoordinatorRequest in flight (after 79075ms, timeout #2)
2021-03-26T05:02:03.073397748Z %4|1616734915.267|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 3 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:03:06.297096487Z %3|1616734925.659|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 3 request(s) timed out: disconnect (after 27581565ms in state UP)
2021-03-26T05:03:15.392583200Z %3|1616734990.623|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: Failed to resolve 'kafka:9092': Temporary failure in name resolution (after 115463ms in state CONNECT)
2021-03-26T05:03:41.620470993Z %5|1616735021.310|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out MetadataRequest in flight (after 64199ms, timeout #0)
2021-03-26T05:03:41.742214578Z %4|1616735021.310|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:03:41.742605315Z %3|1616735021.423|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 27677478ms in state UP)
2021-03-26T05:03:53.223514392Z %5|1616735016.569|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 14627ms, timeout #0)
2021-03-26T05:03:53.229564638Z %4|1616735032.804|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 31322ms in state APIVERSION_QUERY)
2021-03-26T05:04:03.825265568Z %4|1616735043.231|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:04:43.637365521Z %5|1616735083.385|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 13058ms, timeout #0)
2021-03-26T05:04:43.648721729Z %4|1616735083.510|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 13316ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:04:43.649026535Z %4|1616735083.510|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:04:48.201740289Z %4|1616735088.082|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 2 out-queue, 0 partially-sent requests
2021-03-26T05:04:59.011484437Z %3|1616735098.542|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 2 request(s) timed out: disconnect (after 10587ms in state APIVERSION_QUERY)
2021-03-26T05:05:09.980230579Z %5|1616735107.587|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 6048ms, timeout #0)
2021-03-26T05:05:15.249698786Z %4|1616735114.852|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 15686ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:05:15.256966887Z %4|1616735114.853|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:05:24.896330155Z %5|1616735124.889|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out ApiVersionRequest in flight (after 13542ms, timeout #0)
2021-03-26T05:05:25.019505928Z %4|1616735124.995|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 13755ms in state APIVERSION_QUERY)
2021-03-26T05:05:25.019542628Z %4|1616735124.995|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:05:54.537513181Z 2021-03-26 05:05:26,629 Caught ConsumerError('KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}'), shutting down...
2021-03-26T05:06:43.063515154Z Traceback (most recent call last):
2021-03-26T05:06:43.065683459Z   File "/usr/local/bin/snuba", line 33, in <module>
2021-03-26T05:06:54.207631968Z %4|1616735214.206|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:06:59.723620442Z     sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
2021-03-26T05:06:59.740155938Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
2021-03-26T05:07:20.028944947Z     return self.main(*args, **kwargs)
2021-03-26T05:07:20.541428853Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
2021-03-26T05:07:43.711969607Z     rv = self.invoke(ctx)
2021-03-26T05:07:47.485573365Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
2021-03-26T05:08:02.743458870Z %4|1616735280.420|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:08:10.760711134Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2021-03-26T05:08:10.763185725Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
2021-03-26T05:08:10.827508278Z %3|1616735290.823|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Connect to ipv4#172.21.0.11:9092 failed: Connection timed out (after 160159ms in state CONNECT)
2021-03-26T05:08:18.465044179Z     return ctx.invoke(self.callback, **ctx.params)
2021-03-26T05:08:18.468120143Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
2021-03-26T05:08:24.038443683Z     return callback(*args, **kwargs)
2021-03-26T05:08:24.041354404Z   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
2021-03-26T05:08:24.041359644Z     consumer.run()
2021-03-26T05:08:24.041362544Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
2021-03-26T05:08:24.043969919Z     self._run_once()
2021-03-26T05:08:24.043982149Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
2021-03-26T05:08:24.044240034Z     self.__message = self.__consumer.poll(timeout=1.0)
2021-03-26T05:08:24.044319046Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 773, in poll
2021-03-26T05:08:24.045750876Z     return super().poll(timeout)
2021-03-26T05:08:24.045760266Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 405, in poll
2021-03-26T05:08:24.046401859Z     raise ConsumerError(str(error))
2021-03-26T05:08:24.047471122Z snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}
2021-03-26T05:08:52.187918785Z 2021-03-26 05:08:52,187 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 28}
2021-03-26T05:12:18.825590585Z %4|1616735538.686|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 11299 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:12:23.441411498Z 2021-03-26 05:12:19,731 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:12:25.467480590Z %5|1616735542.667|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10625ms, timeout #0)
2021-03-26T05:12:25.476426106Z %4|1616735545.276|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:12:28.991180118Z %3|1616735547.681|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 214706ms in state UP)
2021-03-26T05:12:44.323792386Z 2021-03-26 05:12:44,323 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 28}
2021-03-26T05:15:08.385605039Z %4|1616735669.337|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 47917 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:15:30.289739118Z %5|1616735726.267|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FetchRequest in flight (after 68489ms, timeout #0)
2021-03-26T05:15:30.291415943Z %4|1616735726.267|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:15:30.291420604Z %3|1616735726.268|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 395708ms in state UP)
2021-03-26T05:15:30.291423314Z %5|1616735726.345|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out MetadataRequest in flight (after 60323ms, timeout #0)
2021-03-26T05:15:30.291426014Z %4|1616735726.345|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:15:30.291428764Z %3|1616735726.346|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 395786ms in state UP)
2021-03-26T05:17:13.747878324Z %5|1616735819.789|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out MetadataRequest in flight (after 93442ms, timeout #0)
2021-03-26T05:17:15.178628936Z %4|1616735819.789|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:17:19.007471528Z %3|1616735819.790|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 265317ms in state UP, 1 identical error(s) suppressed)
2021-03-26T05:23:03.686843053Z %3|1616736110.459|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: Failed to resolve 'kafka:9092': Temporary failure in name resolution (after 274501ms in state CONNECT)
2021-03-26T05:23:03.687444276Z %4|1616736121.562|MAXPOLL|rdkafka#consumer-2| [thrd:main]: Application maximum poll interval (300000ms) exceeded by 82392ms (adjust max.poll.interval.ms for long-running message processing): leaving group
2021-03-26T05:23:03.687454757Z %4|1616736181.132|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 0 in-flight, 1 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:23:03.687458027Z %4|1616736183.541|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:23:03.687460727Z %3|1616736183.634|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 92ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:23:03.687463457Z %3|1616736183.660|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Failed to resolve 'kafka:9092': Temporary failure in name resolution (after 447281ms in state CONNECT)
2021-03-26T05:23:03.687466327Z 2021-03-26 05:23:03,653 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:23:03.704597315Z 2021-03-26 05:23:03,692 Caught ConsumerError('KafkaError{code=_MAX_POLL_EXCEEDED,val=-147,str="Application maximum poll interval (300000ms) exceeded by 82392ms"}'), shutting down...
2021-03-26T05:23:04.061439325Z Traceback (most recent call last):
2021-03-26T05:23:04.061454205Z   File "/usr/local/bin/snuba", line 33, in <module>
2021-03-26T05:23:04.062207561Z     sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
2021-03-26T05:23:04.062215901Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
2021-03-26T05:23:04.062612379Z     return self.main(*args, **kwargs)
2021-03-26T05:23:04.062621729Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
2021-03-26T05:23:04.063600450Z     rv = self.invoke(ctx)
2021-03-26T05:23:04.063612760Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
2021-03-26T05:23:04.065347396Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2021-03-26T05:23:04.065355487Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
2021-03-26T05:23:04.066548202Z     return ctx.invoke(self.callback, **ctx.params)
2021-03-26T05:23:04.066555982Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
2021-03-26T05:23:04.068745528Z     return callback(*args, **kwargs)
2021-03-26T05:23:04.068756968Z   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
2021-03-26T05:23:04.069202257Z     consumer.run()
2021-03-26T05:23:04.069215577Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
2021-03-26T05:23:04.072663340Z     self._run_once()
2021-03-26T05:23:04.072680170Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
2021-03-26T05:23:04.073938037Z     self.__message = self.__consumer.poll(timeout=1.0)
2021-03-26T05:23:04.073947018Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 773, in poll
2021-03-26T05:23:04.076875159Z     return super().poll(timeout)
2021-03-26T05:23:04.076886699Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 405, in poll
2021-03-26T05:23:04.079607176Z     raise ConsumerError(str(error))
2021-03-26T05:23:04.079994774Z snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=_MAX_POLL_EXCEEDED,val=-147,str="Application maximum poll interval (300000ms) exceeded by 82392ms"}
2021-03-26T05:23:23.786088554Z 2021-03-26 05:23:23,785 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 28}
2021-03-26T05:26:47.647856162Z %4|1616736399.435|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 11949 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:26:49.594861029Z %5|1616736407.680|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10087ms, timeout #0)
2021-03-26T05:26:51.943457605Z %4|1616736411.936|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:26:51.943768482Z %3|1616736411.940|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 208169ms in state UP)
2021-03-26T05:26:56.417719912Z 2021-03-26 05:26:52,003 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:27:09.111477504Z %5|1616736428.451|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 9495ms, timeout #0)
2021-03-26T05:27:13.177882889Z %4|1616736432.945|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 16092ms in state APIVERSION_QUERY)
2021-03-26T05:27:14.260886105Z %4|1616736432.962|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:27:32.079476922Z 2021-03-26 05:27:32,079 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 28}
2021-03-26T05:28:43.140356513Z %4|1616736523.130|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 10491 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:28:43.700943358Z 2021-03-26 05:28:43,537 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:28:46.603512191Z %5|1616736526.294|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10610ms, timeout #0)
2021-03-26T05:28:46.616523886Z %4|1616736526.294|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:28:46.616529906Z %3|1616736526.295|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 2 request(s) timed out: disconnect (after 82301ms in state UP)
2021-03-26T05:29:03.456119617Z %5|1616736543.365|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10491ms, timeout #0)
2021-03-26T05:29:03.458626400Z %4|1616736543.400|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 10583ms in state APIVERSION_QUERY)
2021-03-26T05:29:03.458633340Z %4|1616736543.401|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:29:20.629874429Z %5|1616736560.401|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10202ms, timeout #0)
2021-03-26T05:29:20.639488271Z %4|1616736560.401|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 10932ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:29:20.639510522Z %4|1616736560.402|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:29:35.123459132Z %5|1616736574.204|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10391ms, timeout #0)
2021-03-26T05:29:35.988397813Z %5|1616736574.204|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FetchRequest in flight (after 61299ms, timeout #0)
2021-03-26T05:29:35.989106358Z %4|1616736574.263|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:29:35.989110118Z %4|1616736574.264|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:29:38.259483788Z %3|1616736577.864|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 370496ms in state UP)
2021-03-26T05:30:01.584296955Z %5|1616736601.223|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 7018ms, timeout #0)
2021-03-26T05:30:01.703657195Z %4|1616736601.295|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 13640ms in state APIVERSION_QUERY, 2 identical error(s) suppressed)
2021-03-26T05:30:01.704055253Z %4|1616736601.296|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:30:53.317229007Z %3|1616736645.471|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: Failed to resolve 'kafka:9092': Temporary failure in name resolution (after 40537ms in state CONNECT)
2021-03-26T05:30:53.317380870Z %4|1616736646.444|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:31:09.077255692Z %4|1616736653.005|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 3 out-queue, 0 partially-sent requests
2021-03-26T05:31:15.987189858Z %4|1616736675.971|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 1 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:31:20.139147702Z 2021-03-26 05:31:15,948 Caught ConsumerError('KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}'), shutting down...
2021-03-26T05:31:28.585469705Z Traceback (most recent call last):
2021-03-26T05:31:28.588165892Z   File "/usr/local/bin/snuba", line 33, in <module>
2021-03-26T05:31:33.307483958Z     sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
2021-03-26T05:31:33.308122422Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
2021-03-26T05:31:33.308127242Z     return self.main(*args, **kwargs)
2021-03-26T05:31:33.308129872Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
2021-03-26T05:31:33.308132522Z     rv = self.invoke(ctx)
2021-03-26T05:31:33.308134882Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
2021-03-26T05:31:33.308137482Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2021-03-26T05:31:33.308140122Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
2021-03-26T05:31:33.308148633Z     return ctx.invoke(self.callback, **ctx.params)
2021-03-26T05:31:33.308151333Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
2021-03-26T05:31:33.308153933Z     return callback(*args, **kwargs)
2021-03-26T05:31:33.308156453Z   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
2021-03-26T05:31:33.686638256Z     consumer.run()
2021-03-26T05:31:33.825992207Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
2021-03-26T05:31:34.876379211Z     self._run_once()
2021-03-26T05:31:35.012411921Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
2021-03-26T05:31:35.012428622Z     self.__message = self.__consumer.poll(timeout=1.0)
2021-03-26T05:31:35.012431812Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 773, in poll
2021-03-26T05:31:35.012439472Z     return super().poll(timeout)
2021-03-26T05:31:35.012442072Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 405, in poll
2021-03-26T05:31:35.012444692Z     raise ConsumerError(str(error))
2021-03-26T05:31:35.012447262Z snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}
2021-03-26T05:32:21.770862689Z 2021-03-26 05:32:21,770 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 28}
2021-03-26T05:32:29.967329963Z 2021-03-26 05:32:29,967 Completed processing <Batch: 1 message, open for 1.03 seconds>.
2021-03-26T05:36:25.659905754Z %4|1616736985.339|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 11458 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:36:28.667485331Z %5|1616736988.508|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10361ms, timeout #0)
2021-03-26T05:36:28.675691524Z %4|1616736988.509|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:36:28.675699154Z %3|1616736988.551|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 249768ms in state UP)
2021-03-26T05:36:47.467465086Z %5|1616737007.228|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10149ms, timeout #0)
2021-03-26T05:36:47.557100877Z %4|1616737007.228|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 10151ms in state APIVERSION_QUERY)
2021-03-26T05:36:47.557423713Z %4|1616737007.229|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:36:50.286964993Z 2021-03-26 05:36:41,872 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:37:01.127500335Z %5|1616737019.921|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10540ms, timeout #0)
2021-03-26T05:37:01.129202310Z %4|1616737019.979|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 10587ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:37:05.281561675Z %4|1616737025.093|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:37:07.507238735Z %5|1616737027.079|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FetchRequest in flight (after 61659ms, timeout #0)
2021-03-26T05:37:07.517455930Z %4|1616737027.079|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:37:07.517465910Z %3|1616737027.080|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 288308ms in state UP)
2021-03-26T05:37:25.487525177Z 2021-03-26 05:37:23,454 Caught ConsumerError('KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}'), shutting down...
2021-03-26T05:38:20.896959403Z %5|1616737100.731|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out MetadataRequest in flight (after 60535ms, timeout #0)
2021-03-26T05:38:22.395481120Z %4|1616737100.731|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:38:24.254326172Z %3|1616737100.732|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 361966ms in state UP)
2021-03-26T05:39:29.027519275Z %4|1616737164.780|REQTMOUT|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:40:20.445969982Z Traceback (most recent call last):
2021-03-26T05:40:20.488073177Z   File "/usr/local/bin/snuba", line 33, in <module>
2021-03-26T05:40:20.489965227Z     sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
2021-03-26T05:40:20.489973347Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
2021-03-26T05:40:20.493291687Z     return self.main(*args, **kwargs)
2021-03-26T05:40:20.493305498Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
2021-03-26T05:40:20.494182226Z     rv = self.invoke(ctx)
2021-03-26T05:40:20.494189636Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
2021-03-26T05:40:20.501118122Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2021-03-26T05:40:20.501146172Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
2021-03-26T05:40:20.501149372Z     return ctx.invoke(self.callback, **ctx.params)
2021-03-26T05:40:20.501151922Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
2021-03-26T05:40:20.501154482Z     return callback(*args, **kwargs)
2021-03-26T05:40:20.501156762Z   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
2021-03-26T05:40:20.501159472Z     consumer.run()
2021-03-26T05:40:20.501161983Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
2021-03-26T05:40:20.501164603Z     self._run_once()
2021-03-26T05:40:20.501166773Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
2021-03-26T05:40:20.501169313Z     self.__message = self.__consumer.poll(timeout=1.0)
2021-03-26T05:40:20.501171593Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 773, in poll
2021-03-26T05:40:20.501179963Z     return super().poll(timeout)
2021-03-26T05:40:20.501182773Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 405, in poll
2021-03-26T05:40:20.501185333Z     raise ConsumerError(str(error))
2021-03-26T05:40:20.501187753Z snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}
2021-03-26T05:40:45.084385438Z 2021-03-26 05:40:45,084 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 29}
2021-03-26T05:42:58.674356534Z %4|1616737378.585|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 10290 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:43:02.509766610Z %5|1616737382.388|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 10276ms, timeout #0)
2021-03-26T05:43:02.515738795Z %4|1616737382.388|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:43:04.055939376Z %3|1616737383.868|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 138467ms in state UP)
2021-03-26T05:43:11.774788678Z 2021-03-26 05:43:04,787 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:43:21.555674001Z %5|1616737401.384|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10672ms, timeout #0)
2021-03-26T05:43:21.557938928Z %4|1616737401.385|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 12589ms in state APIVERSION_QUERY)
2021-03-26T05:43:21.558251584Z %4|1616737401.415|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:43:35.015482530Z %5|1616737414.851|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 9992ms, timeout #0)
2021-03-26T05:43:35.017701826Z %4|1616737414.853|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 10624ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:43:35.023161601Z %4|1616737414.853|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:43:48.183968866Z %5|1616737427.862|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 10307ms, timeout #0)
2021-03-26T05:43:49.749813363Z %4|1616737429.311|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:43:50.779963689Z %5|1616737430.759|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FetchRequest in flight (after 61339ms, timeout #0)
2021-03-26T05:43:50.779982509Z %4|1616737430.759|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:43:50.779986159Z %3|1616737430.759|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 186839ms in state UP)
2021-03-26T05:44:08.669847994Z 2021-03-26 05:43:57,645 Caught ConsumerError('KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}'), shutting down...
2021-03-26T05:44:23.116447956Z Traceback (most recent call last):
2021-03-26T05:44:23.989317610Z   File "/usr/local/bin/snuba", line 33, in <module>
2021-03-26T05:44:23.989505494Z     sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
2021-03-26T05:44:23.989512384Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
2021-03-26T05:44:23.989515534Z     return self.main(*args, **kwargs)
2021-03-26T05:44:23.989518004Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
2021-03-26T05:44:23.989520634Z     rv = self.invoke(ctx)
2021-03-26T05:44:23.989523404Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
2021-03-26T05:44:23.989526094Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2021-03-26T05:44:23.989528594Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
2021-03-26T05:44:23.989531185Z     return ctx.invoke(self.callback, **ctx.params)
2021-03-26T05:44:23.989533675Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
2021-03-26T05:44:23.989536125Z     return callback(*args, **kwargs)
2021-03-26T05:44:23.989538415Z   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
2021-03-26T05:44:24.453126817Z     consumer.run()
2021-03-26T05:44:24.453784191Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
2021-03-26T05:44:24.541265880Z     self._run_once()
2021-03-26T05:44:24.541479654Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
2021-03-26T05:44:24.542224620Z     self.__message = self.__consumer.poll(timeout=1.0)
2021-03-26T05:44:24.542348362Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 773, in poll
2021-03-26T05:44:24.544151070Z     return super().poll(timeout)
2021-03-26T05:44:24.546994240Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 405, in poll
2021-03-26T05:44:24.547396188Z     raise ConsumerError(str(error))
2021-03-26T05:44:25.320483456Z snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}
2021-03-26T05:44:58.692253491Z 2021-03-26 05:44:58,690 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 29}
2021-03-26T05:45:34.457360073Z %4|1616737534.369|SESSTMOUT|rdkafka#consumer-2| [thrd:main]: Consumer group session timed out (in join-state started) after 10865 ms without a successful response from the group coordinator (broker 1001, last error was Success): revoking assignment and rejoining group
2021-03-26T05:45:39.543782194Z %5|1616737538.832|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out HeartbeatRequest in flight (after 12264ms, timeout #0)
2021-03-26T05:45:39.544532950Z %4|1616737538.847|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:45:54.519461412Z %3|1616737554.281|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 50876ms in state UP)
2021-03-26T05:46:17.299521094Z %4|1616737577.124|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:46:17.301753621Z %3|1616737577.124|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: 1 request(s) timed out: disconnect (after 2380ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:46:26.551520965Z %5|1616737585.209|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out FetchRequest in flight (after 60720ms, timeout #0)
2021-03-26T05:46:26.556096171Z %4|1616737585.302|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:46:53.030974732Z %3|1616737612.463|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: 1 request(s) timed out: disconnect (after 89626ms in state UP)
2021-03-26T05:47:14.685426595Z %5|1616737634.083|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 16460ms, timeout #0)
2021-03-26T05:47:14.793660589Z %4|1616737634.091|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 17803ms in state APIVERSION_QUERY)
2021-03-26T05:47:14.793664990Z %4|1616737634.091|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:47:15.907304517Z %5|1616737635.633|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out ApiVersionRequest in flight (after 10378ms, timeout #0)
2021-03-26T05:47:15.909612655Z %4|1616737635.691|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 13010ms in state APIVERSION_QUERY)
2021-03-26T05:47:16.923480435Z %4|1616737636.261|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:47:49.559467752Z %5|1616737663.193|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out ApiVersionRequest in flight (after 27896ms, timeout #0)
2021-03-26T05:48:31.221347887Z %4|1616737710.569|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: ApiVersionRequest failed: Local: Timed out: probably due to broker version < 0.10 (see api.version.request configuration) (after 33896ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
2021-03-26T05:48:34.996889396Z %4|1616737714.736|REQTMOUT|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator/1001: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
2021-03-26T05:48:36.331480620Z %4|1616737715.343|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: ApiVersionRequest failed: Local: Timed out in queue: probably due to broker version < 0.10 (see api.version.request configuration) (after 17096ms in state APIVERSION_QUERY)
2021-03-26T05:48:36.336373482Z %4|1616737715.343|REQTMOUT|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/1001: Timed out 0 in-flight, 0 retry-queued, 1 out-queue, 0 partially-sent requests
2021-03-26T05:48:58.931477866Z 2021-03-26 05:46:16,208 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
2021-03-26T05:49:02.008726712Z 2021-03-26 05:49:00,078 Caught ConsumerError('KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}'), shutting down...
2021-03-26T05:49:04.765999268Z Traceback (most recent call last):
2021-03-26T05:49:04.767484239Z   File "/usr/local/bin/snuba", line 33, in <module>
2021-03-26T05:49:05.277003696Z     sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
2021-03-26T05:49:05.279402556Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
2021-03-26T05:49:05.279423506Z     return self.main(*args, **kwargs)
2021-03-26T05:49:05.279427007Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
2021-03-26T05:49:05.279444237Z     rv = self.invoke(ctx)
2021-03-26T05:49:05.279450827Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
2021-03-26T05:49:05.279453397Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2021-03-26T05:49:05.279455697Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
2021-03-26T05:49:05.279710913Z     return ctx.invoke(self.callback, **ctx.params)
2021-03-26T05:49:05.279770354Z   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
2021-03-26T05:49:05.280125392Z     return callback(*args, **kwargs)
2021-03-26T05:49:05.280194823Z   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
2021-03-26T05:49:05.283066403Z     consumer.run()
2021-03-26T05:49:05.283278508Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
2021-03-26T05:49:05.295445134Z     self._run_once()
2021-03-26T05:49:05.295545567Z   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
2021-03-26T05:49:05.299794506Z     self.__message = self.__consumer.poll(timeout=1.0)
2021-03-26T05:49:05.299868868Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 773, in poll
2021-03-26T05:49:05.304579297Z     return super().poll(timeout)
2021-03-26T05:49:05.304656419Z   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 405, in poll
2021-03-26T05:49:05.304964656Z     raise ConsumerError(str(error))
2021-03-26T05:49:05.305290292Z snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=_TIMED_OUT,val=-185,str="FindCoordinator response error: Local: Timed out"}
2021-03-26T05:52:35.565502635Z 2021-03-26 05:52:35,565 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 29}
2021-03-26T05:52:43.014035434Z 2021-03-26 05:52:43,013 Completed processing <Batch: 2 messages, open for 1.03 seconds>.
2021-03-26T06:44:41.633785085Z 2021-03-26 06:44:41,633 Completed processing <Batch: 1 message, open for 1.02 seconds>.
2021-03-26T07:23:32.476365004Z 2021-03-26 07:23:32,476 Completed processing <Batch: 1 message, open for 1.01 seconds>.

It looks like the restarts were related to a Kafka issue, but I have no idea why. My only assumption is that there is not enough CPU and memory on the machine.
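One way to confirm that assumption is to check whether Docker recorded the kernel's OOM killer terminating the containers. A minimal sketch, assuming the standard `docker inspect` JSON layout (`Name` with a leading slash, `State.OOMKilled` boolean); the sample data below is hypothetical:

```python
import json

def oom_killed(inspect_json):
    """Given the JSON text produced by `docker inspect <containers...>`,
    return the names of containers whose State.OOMKilled flag is set."""
    return [
        c["Name"].lstrip("/")
        for c in json.loads(inspect_json)
        if c.get("State", {}).get("OOMKilled", False)
    ]

# Hypothetical sample mimicking `docker inspect $(docker ps -aq)` output
sample = json.dumps([
    {"Name": "/sentry_onpremise_ingest-consumer_1", "State": {"OOMKilled": True}},
    {"Name": "/sentry_onpremise_worker_1", "State": {"OOMKilled": False}},
])
print(oom_killed(sample))
```

Feeding it the real output of `docker inspect $(docker ps -aq)` would list exactly which services were OOM-killed rather than crashed.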

BYK commented 3 years ago

the only assumption that not enough CPU and Memory on machine

That is correct. If you look at the ClickHouse and install logs, they both warn you about the memory being too small (our recommended size is 8 GB). You also got a warning regarding your CPU cores: 2 logical cores are barely enough, and we strongly recommend 4 cores.
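A tiny preflight check against those recommended minimums makes the gap obvious. A sketch assuming the reporter's host has 2 logical cores and roughly 4 GB of RAM (inferred from the 1.9G tmpfs mounts in the `df` output later in the thread, since tmpfs defaults to half of physical memory):

```python
RECOMMENDED_CORES = 4   # per the recommendation above
RECOMMENDED_RAM_GB = 8

def undersized(cores, ram_gb):
    """Return human-readable warnings for a host below the recommended specs."""
    warnings = []
    if cores < RECOMMENDED_CORES:
        warnings.append(f"{cores} cores < recommended {RECOMMENDED_CORES}")
    if ram_gb < RECOMMENDED_RAM_GB:
        warnings.append(f"{ram_gb} GB RAM < recommended {RECOMMENDED_RAM_GB} GB")
    return warnings

# Assumed specs for the machine in this issue: 2 cores, ~4 GB RAM
print(undersized(2, 4))
```

On such a host both checks fire, which matches the OOM-kill pattern in the logs.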

All the sudden restarts seem to be out-of-memory kills, so I'm closing this issue with a recommendation to use a machine with more memory and CPU cores.

You'll also need a lot more storage than you used to.

romankydybets commented 3 years ago

@BYK
How much storage do I need? I use 30 GB of storage; your recommendation was 20 GB.

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  2.6M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/nvme0n1p1   30G  7.7G   23G  26% /
tmpfs           389M     0  389M   0% /run/user/1000
BYK commented 3 years ago

your recommendation was 20GB

That's the bare minimum to run a very low-volume instance. I'd use at least a couple of hundred GB, as transactions pile up quite quickly.
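A back-of-envelope estimate shows why a couple of hundred GB is plausible. This is an illustrative sketch, not Sentry's sizing formula; the event rate, event size, retention window, and overhead multiplier (for Kafka logs, ClickHouse merges, and Postgres) are all assumptions:

```python
def storage_estimate_gb(events_per_day, avg_event_kb, retention_days, overhead=3.0):
    """Rough disk estimate: raw event volume over the retention window,
    times an assumed overhead factor for Kafka, ClickHouse, and Postgres."""
    raw_gb = events_per_day * avg_event_kb * retention_days / (1024 * 1024)
    return raw_gb * overhead

# e.g. 100k events/day at ~10 KB each, kept for 90 days
print(round(storage_estimate_gb(100_000, 10, 90), 1))
```

Even at this modest volume the estimate lands well above 200 GB, far past the 30 GB disk shown above.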

romankydybets commented 3 years ago

@BYK thank you. I think for our setup we will go back to version 9.1.2, which was working great.

BYK commented 3 years ago

@Kydybets no worries! The latest versions of Sentry come with a lot more features, including performance monitoring, metric alerts, and much better search and visualization capabilities. That said, all of this doesn't come for free, so use what works best for you :)

But also keep in mind that Sentry 9.1.2 uses Python 2.7, which reached end-of-life last year and is no longer getting security updates.

romankydybets commented 3 years ago

@BYK unfortunately we are a startup that does not have much spare money in the pocket, and we are only using issue monitoring and alerting in Slack. That's why I don't see these benefits; maybe it's worth paying for the hosted version in our case.