zhouyangit closed this issue 3 years ago.
Environment:

- Docker version 20.10.7, build f0df350
- docker-compose version 1.29.2, build 5becea4c
- CentOS 7.9
```shell
./install.sh
docker-compose up -d
```
I was able to reach the login page at http://my ip:9000.
What you saw along the way:

Latest install logs:

```shell
ls -1 sentry_install_log-*.txt | tail -1 | xargs cat
```

`docker-compose logs` output:

```shell
docker-compose logs
```
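As a side note, the install-log one-liner above relies on install.sh timestamping its log file names (the `sentry_install_log-*.txt` glob suggests it does), so lexicographic order from `ls -1` is chronological order and `tail -1` picks the newest. A minimal sketch with fabricated file names:

```shell
# With timestamped names, lexicographic sort order equals chronological order,
# so `tail -1` selects the most recent install log.
cd "$(mktemp -d)"
echo "first run"  > sentry_install_log-2021-06-17_0300.txt
echo "second run" > sentry_install_log-2021-06-18_0325.txt
newest=$(ls -1 sentry_install_log-*.txt | tail -1)
cat "$newest"   # prints the contents of the newest log
```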
Attaching to sentry_onpremise_nginx_1, sentry_onpremise_relay_1, sentry_onpremise_subscription-consumer-events_1, sentry_onpremise_ingest-consumer_1, sentry_onpremise_worker_1, sentry_onpremise_web_1, sentry_onpremise_sentry-cleanup_1, sentry_onpremise_subscription-consumer-transactions_1, sentry_onpremise_post-process-forwarder_1, sentry_onpremise_cron_1, sentry_onpremise_snuba-sessions-consumer_1, sentry_onpremise_snuba-transactions-cleanup_1, sentry_onpremise_snuba-subscription-consumer-transactions_1, sentry_onpremise_snuba-outcomes-consumer_1, sentry_onpremise_snuba-api_1, sentry_onpremise_snuba-subscription-consumer-events_1, sentry_onpremise_snuba-replacer_1, sentry_onpremise_snuba-transactions-consumer_1, sentry_onpremise_snuba-consumer_1, sentry_onpremise_snuba-cleanup_1, sentry_onpremise_kafka_1, sentry_onpremise_smtp_1, sentry_onpremise_symbolicator_1, sentry_onpremise_symbolicator-cleanup_1, sentry_onpremise_redis_1, sentry_onpremise_memcached_1, sentry_onpremise_clickhouse_1, sentry_onpremise_postgres_1, sentry_onpremise_geoipupdate_1, sentry_onpremise_zookeeper_1 cron_1 | 03:26:10 [INFO] sentry.plugins.github: apps-not-configured kafka_1 | ===> ENV Variables ... 
kafka_1 | ALLOW_UNSIGNED=false kafka_1 | COMPONENT=kafka kafka_1 | CONFLUENT_DEB_VERSION=1 kafka_1 | CONFLUENT_PLATFORM_LABEL= kafka_1 | CONFLUENT_SUPPORT_METRICS_ENABLE=false kafka_1 | CONFLUENT_VERSION=5.5.0 kafka_1 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar kafka_1 | HOME=/root kafka_1 | HOSTNAME=ddbebc6d726e kafka_1 | KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 kafka_1 | KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN kafka_1 | KAFKA_LOG4J_ROOT_LOGLEVEL=WARN kafka_1 | KAFKA_LOG_RETENTION_HOURS=24 kafka_1 | KAFKA_MAX_REQUEST_SIZE=50000000 kafka_1 | KAFKA_MESSAGE_MAX_BYTES=50000000 kafka_1 | KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1 kafka_1 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 kafka_1 | KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN kafka_1 | KAFKA_VERSION= kafka_1 | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 kafka_1 | LANG=C.UTF-8 kafka_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin kafka_1 | PWD=/ kafka_1 | PYTHON_PIP_VERSION=8.1.2 kafka_1 | PYTHON_VERSION=2.7.9-1 kafka_1 | SCALA_VERSION=2.12 kafka_1 | SHLVL=1 kafka_1 | ZULU_OPENJDK_VERSION=8=8.38.0.13 kafka_1 | _=/usr/bin/env kafka_1 | ===> User kafka_1 | uid=0(root) gid=0(root) groups=0(root) kafka_1 | ===> Configuring ... kafka_1 | ===> Running preflight checks ... kafka_1 | ===> Check if /var/lib/kafka/data is writable ... kafka_1 | ===> Check if Zookeeper is healthy ... kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=ddbebc6d726e kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212 kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc. 
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA> kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-123.el7.x86_64 kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/ kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=476MB kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=7115MB kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=481MB kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d kafka_1 | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. 
feature enabled= kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.22.0.2:2181. Will not attempt to authenticate using SASL (unknown error) kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.22.0.3:48763, server: zookeeper/172.22.0.2:2181 kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.22.0.2:2181, sessionid = 0x1000a6e98810000, negotiated timeout = 40000 kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000a6e98810000 closed kafka_1 | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000a6e98810000 kafka_1 | ===> Launching ... kafka_1 | ===> Launching kafka ... kafka_1 | [2021-06-18 03:25:54,996] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka_1 | [2021-06-18 03:25:57,111] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig) kafka_1 | [2021-06-18 03:25:57,111] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable) kafka_1 | [2021-06-18 03:25:59,175] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka_1 | [2021-06-18 03:25:59,284] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka_1 | [2021-06-18 03:26:00,353] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.Acceptor) kafka_1 | [2021-06-18 03:26:00,473] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer) kafka_1 | [2021-06-18 03:26:00,477] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer) kafka_1 | [2021-06-18 03:26:00,679] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient) kafka_1 | [2021-06-18 03:26:00,736] INFO Stat of the created znode at /brokers/ids/1001 is: 223,223,1623986760725,1623986760725,1,0,0,72069064159199233,180,0,223 kafka_1 | (kafka.zk.KafkaZkClient) kafka_1 | [2021-06-18 03:26:00,736] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 223 (kafka.zk.KafkaZkClient) kafka_1 | [2021-06-18 03:26:02,255] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka_1 | [2021-06-18 03:26:02,767] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer) clickhouse_1 | Processing configuration file '/etc/clickhouse-server/config.xml'. clickhouse_1 | Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'. clickhouse_1 | Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'. 
clickhouse_1 | Include not found: clickhouse_remote_servers clickhouse_1 | Include not found: clickhouse_compression clickhouse_1 | Logging information to /var/log/clickhouse-server/clickhouse-server.log clickhouse_1 | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log clickhouse_1 | Logging information to console clickhouse_1 | 2021.06.18 03:25:36.007396 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433 clickhouse_1 | 2021.06.18 03:25:36.011448 [ 1 ] {} <Information> Application: starting up clickhouse_1 | Include not found: networks clickhouse_1 | 2021.06.18 03:25:36.044498 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/ clickhouse_1 | 2021.06.18 03:25:36.046616 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries. clickhouse_1 | 2021.06.18 03:25:36.053467 [ 46 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads clickhouse_1 | 2021.06.18 03:25:36.086870 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables. clickhouse_1 | 2021.06.18 03:25:36.101894 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries. clickhouse_1 | 2021.06.18 03:25:36.121578 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables. clickhouse_1 | 2021.06.18 03:25:36.125068 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads clickhouse_1 | 2021.06.18 03:25:36.125576 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers. 
clickhouse_1 | 2021.06.18 03:25:36.125602 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. clickhouse_1 | 2021.06.18 03:25:36.138654 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host> clickhouse_1 | 2021.06.18 03:25:36.139083 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host> clickhouse_1 | 2021.06.18 03:25:36.139363 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . 
Example for disabled IPv4: <listen_host>::</listen_host> clickhouse_1 | 2021.06.18 03:25:36.139621 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host> clickhouse_1 | 2021.06.18 03:25:36.139839 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123 clickhouse_1 | 2021.06.18 03:25:36.139985 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000 clickhouse_1 | 2021.06.18 03:25:36.140058 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009 clickhouse_1 | 2021.06.18 03:25:36.303765 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004 clickhouse_1 | 2021.06.18 03:25:36.304564 [ 1 ] {} <Information> Application: Available RAM: 31.27 GiB; physical cores: 4; logical cores: 4. clickhouse_1 | 2021.06.18 03:25:36.304586 [ 1 ] {} <Information> Application: Ready for connections. clickhouse_1 | Include not found: clickhouse_remote_servers clickhouse_1 | Include not found: clickhouse_compression clickhouse_1 | 2021.06.18 03:30:03.401976 [ 86 ] {} <Information> TCPHandler: Processed in 0.003 sec. clickhouse_1 | 2021.06.18 03:30:03.453874 [ 86 ] {} <Information> TCPHandler: Done processing connection. clickhouse_1 | 2021.06.18 03:30:03.923984 [ 86 ] {} <Information> TCPHandler: Processed in 0.002 sec. clickhouse_1 | 2021.06.18 03:30:03.979010 [ 86 ] {} <Information> TCPHandler: Done processing connection. 
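The ClickHouse `Listen [::]` errors above appear benign in this run: the IPv4 listeners come up on 8123/9000/9009/9004 and "Ready for connections" follows. If IPv6 is disabled on the host, though, the error text itself suggests pinning `<listen_host>`. A sketch of the drop-in it describes; the directory name is my own choice, so adjust it to wherever your compose file mounts ClickHouse's `config.d` (the log shows `docker_related_config.xml` and `sentry.xml` being merged from there):

```shell
# Write the <listen_host> override the ClickHouse error message recommends
# for IPv6-disabled hosts. <yandex> is the config root in this ClickHouse era.
mkdir -p ./clickhouse-config.d
cat > ./clickhouse-config.d/listen.xml <<'EOF'
<yandex>
    <listen_host>0.0.0.0</listen_host>
</yandex>
EOF
cat ./clickhouse-config.d/listen.xml
```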
clickhouse_1 | 2021.06.18 03:35:03.073589 [ 86 ] {} <Information> TCPHandler: Processed in 0.003 sec. clickhouse_1 | 2021.06.18 03:35:03.127937 [ 86 ] {} <Information> TCPHandler: Done processing connection. clickhouse_1 | 2021.06.18 03:35:03.594726 [ 86 ] {} <Information> TCPHandler: Processed in 0.006 sec. clickhouse_1 | 2021.06.18 03:35:03.642425 [ 86 ] {} <Information> TCPHandler: Done processing connection. clickhouse_1 | 2021.06.18 03:40:03.280689 [ 86 ] {} <Information> TCPHandler: Processed in 0.004 sec. clickhouse_1 | 2021.06.18 03:40:03.334825 [ 86 ] {} <Information> TCPHandler: Done processing connection. clickhouse_1 | 2021.06.18 03:40:03.751889 [ 86 ] {} <Information> TCPHandler: Processed in 0.003 sec. clickhouse_1 | 2021.06.18 03:40:03.800251 [ 86 ] {} <Information> TCPHandler: Done processing connection. ingest-consumer_1 | 03:26:13 [INFO] sentry.plugins.github: apps-not-configured ingest-consumer_1 | 03:26:17 [INFO] batching-kafka-consumer: New partitions assigned: [TopicPartition{topic=ingest-attachments,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-events,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-transactions,partition=0,offset=-1001,error=None}] geoipupdate_1 | error loading configuration file /sentry/GeoIP.conf: error opening file: open /sentry/GeoIP.conf: no such file or directory post-process-forwarder_1 | 03:26:12 [INFO] sentry.plugins.github: apps-not-configured post-process-forwarder_1 | 03:26:16 [INFO] sentry.eventstream.kafka.backend: Received partition assignment: [TopicPartition{topic=events,partition=0,offset=-1001,error=None}] postgres_1 | Setting up Change Data Capture postgres_1 | Replication config already present in pg_hba. Not changing anything. 
postgres_1 | postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization postgres_1 | postgres_1 | LOG: database system was shut down at 2021-06-18 03:22:54 UTC postgres_1 | LOG: MultiXact member wraparound protections are now enabled postgres_1 | LOG: database system is ready to accept connections postgres_1 | LOG: autovacuum launcher started redis_1 | 1:C 18 Jun 2021 03:25:35.544 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo redis_1 | 1:C 18 Jun 2021 03:25:35.544 # Redis version=5.0.12, bits=64, commit=00000000, modified=0, pid=1, just started redis_1 | 1:C 18 Jun 2021 03:25:35.544 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf redis_1 | 1:M 18 Jun 2021 03:25:35.546 * Running mode=standalone, port=6379. redis_1 | 1:M 18 Jun 2021 03:25:35.546 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. redis_1 | 1:M 18 Jun 2021 03:25:35.546 # Server initialized redis_1 | 1:M 18 Jun 2021 03:25:35.546 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. redis_1 | 1:M 18 Jun 2021 03:25:35.546 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. redis_1 | 1:M 18 Jun 2021 03:25:35.554 * DB loaded from disk: 0.008 seconds redis_1 | 1:M 18 Jun 2021 03:25:35.554 * Ready to accept connections redis_1 | 1:M 18 Jun 2021 03:30:36.008 * 100 changes in 300 seconds. Saving... 
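The Redis warnings above (somaxconn lower than the 511 backlog, `overcommit_memory=0`, THP enabled) each come with their own fix in the log text. Collected as a host-side sketch; writing a sysctl drop-in instead of editing `/etc/sysctl.conf` directly is my own choice, and the commands need root on the Docker host:

```shell
# Persist the two sysctl settings Redis warns about; apply with
# `sysctl --system` as root (or reboot). Written locally here for review.
cat > ./99-redis.conf <<'EOF'
vm.overcommit_memory = 1
net.core.somaxconn = 511
EOF
cat ./99-redis.conf
# THP is a separate knob; the warning's own fix, printed rather than run
# since it requires root on the host:
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
```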
redis_1 | 1:M 18 Jun 2021 03:30:36.009 * Background saving started by pid 880 redis_1 | 880:C 18 Jun 2021 03:30:36.023 * DB saved on disk redis_1 | 880:C 18 Jun 2021 03:30:36.023 * RDB: 4 MB of memory used by copy-on-write redis_1 | 1:M 18 Jun 2021 03:30:36.110 * Background saving terminated with success redis_1 | 1:M 18 Jun 2021 03:35:37.044 * 100 changes in 300 seconds. Saving... redis_1 | 1:M 18 Jun 2021 03:35:37.045 * Background saving started by pid 1762 redis_1 | 1762:C 18 Jun 2021 03:35:37.053 * DB saved on disk redis_1 | 1762:C 18 Jun 2021 03:35:37.054 * RDB: 4 MB of memory used by copy-on-write redis_1 | 1:M 18 Jun 2021 03:35:37.145 * Background saving terminated with success redis_1 | 1:M 18 Jun 2021 03:40:38.088 * 100 changes in 300 seconds. Saving... redis_1 | 1:M 18 Jun 2021 03:40:38.089 * Background saving started by pid 2647 redis_1 | 2647:C 18 Jun 2021 03:40:38.103 * DB saved on disk redis_1 | 2647:C 18 Jun 2021 03:40:38.103 * RDB: 4 MB of memory used by copy-on-write redis_1 | 1:M 18 Jun 2021 03:40:38.189 * Background saving terminated with success sentry-cleanup_1 | SHELL=/bin/bash sentry-cleanup_1 | BASH_ENV=/container.env sentry-cleanup_1 | 0 0 * * * gosu sentry sentry cleanup --days 90 > /proc/1/fd/1 2>/proc/1/fd/2 smtp_1 | + sed -ri ' smtp_1 | s/^#?(dc_local_interfaces)=.*/\1='\''0.0.0.0 ; ::0'\''/; smtp_1 | s/^#?(dc_other_hostnames)=.*/\1='\'''\''/; smtp_1 | s/^#?(dc_relay_nets)=.*/\1='\''0.0.0.0\/0'\''/; smtp_1 | s/^#?(dc_eximconfig_configtype)=.*/\1='\''internet'\''/; smtp_1 | ' /etc/exim4/update-exim4.conf.conf smtp_1 | + update-exim4.conf -v smtp_1 | using non-split configuration scheme from /etc/exim4/exim4.conf.template smtp_1 | 271 LOG: MAIN smtp_1 | 271 exim 4.92 daemon started: pid=271, no queue runs, listening for SMTP on port 25 (IPv6 and IPv4) relay_1 | 2021-06-18T03:25:55Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 
9ms in state CONNECT) relay_1 | 2021-06-18T03:25:55Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 9ms in state CONNECT) relay_1 | 2021-06-18T03:25:55Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down relay_1 | 2021-06-18T03:25:56Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111) relay_1 | 2021-06-18T03:25:56Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111) relay_1 | 2021-06-18T03:25:56Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT) relay_1 | 2021-06-18T03:25:56Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT) relay_1 | 2021-06-18T03:25:56Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down relay_1 | 2021-06-18T03:25:56Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) relay_1 | 2021-06-18T03:25:56Z [rdkafka::client] ERROR: 
librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) relay_1 | 2021-06-18T03:25:57Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111) relay_1 | 2021-06-18T03:25:57Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 8ms in state CONNECT, 1 identical error(s) suppressed) relay_1 | 2021-06-18T03:25:57Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 8ms in state CONNECT, 1 identical error(s) suppressed) relay_1 | 2021-06-18T03:25:58Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111) relay_1 | 2021-06-18T03:26:00Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111) relay_1 | 2021-06-18T03:26:04Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect 
error: Connection refused (os error 111) relay_1 | 2021-06-18T03:26:09Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 0ns relay_1 | 2021-06-18T03:26:09Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111) relay_1 | 2021-06-18T03:26:09Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 1s relay_1 | 2021-06-18T03:26:10Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 1.5s relay_1 | 2021-06-18T03:26:11Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 2.25s relay_1 | 2021-06-18T03:26:14Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 3.375s relay_1 | 2021-06-18T03:26:21Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): operation timed out snuba-consumer_1 | %3|1623986748.657|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 16ms in state CONNECT) snuba-consumer_1 | %3|1623986748.660|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 12ms in state CONNECT) snuba-consumer_1 | %3|1623986749.644|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-consumer_1 | %3|1623986749.646|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to 
ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-consumer_1 | 2021-06-18 03:26:08,373 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0} snuba-cleanup_1 | SHELL=/bin/bash snuba-cleanup_1 | BASH_ENV=/container.env snuba-cleanup_1 | */5 * * * * gosu snuba snuba cleanup --storage errors --dry-run False > /proc/1/fd/1 2>/proc/1/fd/2 snuba-cleanup_1 | 2021-06-18 03:30:03,402 Dropped 0 partitions on clickhouse:9000 snuba-cleanup_1 | 2021-06-18 03:35:03,591 Dropped 0 partitions on clickhouse:9000 snuba-cleanup_1 | 2021-06-18 03:40:03,751 Dropped 0 partitions on clickhouse:9000 snuba-api_1 | *** Starting uWSGI 2.0.18 (64bit) on [Fri Jun 18 03:25:55 2021] *** snuba-api_1 | compiled with version: 8.3.0 on 07 June 2021 18:58:31 snuba-api_1 | os: Linux-3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 snuba-api_1 | nodename: ebda0c90d5bc snuba-api_1 | machine: x86_64 snuba-api_1 | clock source: unix snuba-api_1 | pcre jit disabled snuba-api_1 | detected number of CPU cores: 4 snuba-api_1 | current working directory: /usr/src/snuba snuba-api_1 | detected binary path: /usr/local/bin/uwsgi snuba-api_1 | your memory page size is 4096 bytes snuba-api_1 | detected max file descriptor number: 1048576 snuba-api_1 | lock engine: pthread robust mutexes snuba-api_1 | thunder lock: enabled snuba-api_1 | uwsgi socket 0 bound to TCP address 0.0.0.0:1218 fd 3 snuba-api_1 | Python version: 3.8.10 (default, May 12 2021, 15:56:47) [GCC 8.3.0] snuba-api_1 | Set PythonHome to /usr/local snuba-api_1 | Python main interpreter initialized at 0x7fe69ea63bf0 snuba-api_1 | python threads support enabled snuba-api_1 | your server socket listen backlog is limited to 100 connections snuba-api_1 | your mercy for graceful operations on workers is 60 seconds snuba-api_1 | mapped 145808 bytes (142 KB) for 1 cores snuba-api_1 | *** Operational MODE: single process *** snuba-api_1 | initialized 38 metrics 
snuba-api_1 | spawned uWSGI master process (pid: 1) snuba-api_1 | spawned uWSGI worker 1 (pid: 15, cores: 1) snuba-api_1 | metrics collector thread started snuba-api_1 | WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7fe69ea63bf0 pid: 15 (default app) snuba-outcomes-consumer_1 | %3|1623986755.778|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 10ms in state CONNECT) snuba-outcomes-consumer_1 | %3|1623986755.778|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 10ms in state CONNECT) snuba-outcomes-consumer_1 | %3|1623986756.760|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-outcomes-consumer_1 | %3|1623986756.762|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-outcomes-consumer_1 | 2021-06-18 03:26:08,388 New partitions assigned: {Partition(topic=Topic(name='outcomes'), index=0): 0} snuba-sessions-consumer_1 | %3|1623986755.811|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 16ms in state CONNECT) snuba-sessions-consumer_1 | %3|1623986755.812|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 17ms in state CONNECT) snuba-sessions-consumer_1 | %3|1623986756.794|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-sessions-consumer_1 | 
%3|1623986756.795|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-sessions-consumer_1 | 2021-06-18 03:26:08,433 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0} snuba-replacer_1 | %3|1623986753.650|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 47ms in state CONNECT) snuba-replacer_1 | %3|1623986754.603|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-replacer_1 | 2021-06-18 03:26:08,373 New partitions assigned: {Partition(topic=Topic(name='event-replacements'), index=0): 0} snuba-subscription-consumer-events_1 | %3|1623986755.166|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 37ms in state CONNECT) snuba-subscription-consumer-events_1 | %3|1623986756.103|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT) snuba-subscription-consumer-events_1 | %3|1623986756.114|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-subscription-consumer-events_1 | %3|1623986757.108|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 3ms in state CONNECT, 1 identical error(s) suppressed) snuba-subscription-consumer-events_1 | Traceback (most recent call last): snuba-subscription-consumer-events_1 | File 
"/usr/local/bin/snuba", line 33, in <module> snuba-subscription-consumer-events_1 | sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')()) snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__ snuba-subscription-consumer-events_1 | return self.main(*args, **kwargs) snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main snuba-subscription-consumer-events_1 | rv = self.invoke(ctx) snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke snuba-subscription-consumer-events_1 | return _process_result(sub_ctx.command.invoke(sub_ctx)) snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke snuba-subscription-consumer-events_1 | return ctx.invoke(self.callback, **ctx.params) snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke snuba-subscription-consumer-events_1 | return callback(*args, **kwargs) snuba-subscription-consumer-events_1 | File "/usr/src/snuba/snuba/cli/subscriptions.py", line 136, in subscriptions snuba-subscription-consumer-events_1 | SynchronizedConsumer( snuba-subscription-consumer-events_1 | File "/usr/src/snuba/arroyo/synchronized.py", line 102, in __init__ snuba-subscription-consumer-events_1 | self.__commit_log_worker.result() snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 437, in result snuba-subscription-consumer-events_1 | return self.__get_result() snuba-subscription-consumer-events_1 | File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result snuba-subscription-consumer-events_1 | raise self._exception snuba-subscription-consumer-events_1 | File "/usr/src/snuba/arroyo/concurrent.py", line 32, in run 
snuba-subscription-consumer-events_1 | result = function() snuba-subscription-consumer-events_1 | File "/usr/src/snuba/arroyo/synchronized.py", line 126, in __run_commit_log_worker snuba-subscription-consumer-events_1 | message = self.__commit_log_consumer.poll(0.1) snuba-subscription-consumer-events_1 | File "/usr/src/snuba/arroyo/backends/kafka/consumer.py", line 393, in poll snuba-subscription-consumer-events_1 | raise ConsumerError(str(error)) snuba-subscription-consumer-events_1 | arroyo.errors.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"} snuba-subscription-consumer-events_1 | 2021-06-18 03:26:22,003 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0} snuba-subscription-consumer-transactions_1 | %3|1623986755.528|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 22ms in state CONNECT) snuba-subscription-consumer-transactions_1 | %3|1623986756.485|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 4ms in state CONNECT) snuba-subscription-consumer-transactions_1 | %3|1623986756.499|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-subscription-consumer-transactions_1 | %3|1623986757.481|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-subscription-consumer-transactions_1 | 2021-06-18 03:26:11,475 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0} snuba-transactions-cleanup_1 | SHELL=/bin/bash snuba-transactions-cleanup_1 | BASH_ENV=/container.env 
snuba-transactions-cleanup_1 | */5 * * * * gosu snuba snuba cleanup --storage transactions --dry-run False > /proc/1/fd/1 2>/proc/1/fd/2 snuba-transactions-cleanup_1 | 2021-06-18 03:30:03,924 Dropped 0 partitions on clickhouse:9000 snuba-transactions-cleanup_1 | 2021-06-18 03:35:03,073 Dropped 0 partitions on clickhouse:9000 snuba-transactions-cleanup_1 | 2021-06-18 03:40:03,282 Dropped 0 partitions on clickhouse:9000 snuba-transactions-consumer_1 | %3|1623986750.890|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 18ms in state CONNECT) snuba-transactions-consumer_1 | %3|1623986750.899|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 22ms in state CONNECT) snuba-transactions-consumer_1 | %3|1623986751.871|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-transactions-consumer_1 | %3|1623986751.875|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.22.0.3:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed) snuba-transactions-consumer_1 | 2021-06-18 03:26:03,969 Caught ConsumerError('KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}'), shutting down... 
snuba-transactions-consumer_1 | Traceback (most recent call last): snuba-transactions-consumer_1 | File "/usr/local/bin/snuba", line 33, in <module> snuba-transactions-consumer_1 | sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')()) snuba-transactions-consumer_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__ snuba-transactions-consumer_1 | return self.main(*args, **kwargs) snuba-transactions-consumer_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main snuba-transactions-consumer_1 | rv = self.invoke(ctx) snuba-transactions-consumer_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke snuba-transactions-consumer_1 | return _process_result(sub_ctx.command.invoke(sub_ctx)) snuba-transactions-consumer_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke snuba-transactions-consumer_1 | return ctx.invoke(self.callback, **ctx.params) snuba-transactions-consumer_1 | File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke snuba-transactions-consumer_1 | return callback(*args, **kwargs) snuba-transactions-consumer_1 | File "/usr/src/snuba/snuba/cli/consumer.py", line 172, in consumer snuba-transactions-consumer_1 | consumer.run() snuba-transactions-consumer_1 | File "/usr/src/snuba/arroyo/processing/processor.py", line 108, in run snuba-transactions-consumer_1 | self._run_once() snuba-transactions-consumer_1 | File "/usr/src/snuba/arroyo/processing/processor.py", line 138, in _run_once snuba-transactions-consumer_1 | self.__message = self.__consumer.poll(timeout=1.0) snuba-transactions-consumer_1 | File "/usr/src/snuba/snuba/utils/streams/kafka_consumer_with_commit_log.py", line 28, in poll snuba-transactions-consumer_1 | return super().poll(timeout) snuba-transactions-consumer_1 | File "/usr/src/snuba/arroyo/backends/kafka/consumer.py", line 393, in poll snuba-transactions-consumer_1 | raise 
ConsumerError(str(error)) snuba-transactions-consumer_1 | arroyo.errors.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"} snuba-transactions-consumer_1 | 2021-06-18 03:26:17,861 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0} subscription-consumer-events_1 | 03:26:12 [INFO] sentry.plugins.github: apps-not-configured subscription-consumer-events_1 | 03:26:16 [INFO] sentry.snuba.query_subscription_consumer: query-subscription-consumer.on_assign (offsets='{0: None}' partitions='[TopicPartition{topic=events-subscription-results,partition=0,offset=-1001,error=None}]') subscription-consumer-transactions_1 | 03:26:13 [INFO] sentry.plugins.github: apps-not-configured subscription-consumer-transactions_1 | 03:26:16 [INFO] sentry.snuba.query_subscription_consumer: query-subscription-consumer.on_assign (offsets='{0: None}' partitions='[TopicPartition{topic=transactions-subscription-results,partition=0,offset=-1001,error=None}]') symbolicator-cleanup_1 | SHELL=/bin/bash symbolicator-cleanup_1 | BASH_ENV=/container.env symbolicator-cleanup_1 | 55 23 * * * gosu symbolicator symbolicator cleanup > /proc/1/fd/1 2>/proc/1/fd/2 web_1 | 03:26:13 [INFO] sentry.plugins.github: apps-not-configured web_1 | *** Starting uWSGI 2.0.19.1 (64bit) on [Fri Jun 18 03:26:14 2021] *** web_1 | compiled with version: 8.3.0 on 17 June 2021 13:53:15 web_1 | os: Linux-3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 web_1 | nodename: cc092a8c3f45 web_1 | machine: x86_64 web_1 | clock source: unix web_1 | detected number of CPU cores: 4 web_1 | current working directory: / web_1 | detected binary path: /usr/local/bin/uwsgi web_1 | !!! no internal routing support, rebuild with pcre support !!! 
web_1 | your memory page size is 4096 bytes web_1 | detected max file descriptor number: 1048576 web_1 | lock engine: pthread robust mutexes web_1 | thunder lock: enabled web_1 | uWSGI http bound on 0.0.0.0:9000 fd 5 web_1 | uwsgi socket 0 bound to TCP address 127.0.0.1:35159 (port auto-assigned) fd 3 web_1 | Python version: 3.6.13 (default, May 12 2021, 16:48:24) [GCC 8.3.0] web_1 | Set PythonHome to /usr/local web_1 | Python main interpreter initialized at 0x7ff88b322fa0 web_1 | python threads support enabled web_1 | your server socket listen backlog is limited to 100 connections web_1 | your mercy for graceful operations on workers is 60 seconds web_1 | setting request body buffering size to 65536 bytes web_1 | mapped 1924224 bytes (1879 KB) for 12 cores web_1 | *** Operational MODE: preforking+threaded *** web_1 | spawned uWSGI master process (pid: 21) web_1 | spawned uWSGI worker 1 (pid: 25, cores: 4) web_1 | spawned uWSGI worker 2 (pid: 26, cores: 4) web_1 | spawned uWSGI worker 3 (pid: 27, cores: 4) web_1 | spawned uWSGI http 1 (pid: 28) web_1 | 03:26:21 [INFO] sentry.plugins.github: apps-not-configured web_1 | 03:26:21 [INFO] sentry.plugins.github: apps-not-configured web_1 | 03:26:21 [INFO] sentry.plugins.github: apps-not-configured web_1 | WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7ff88b322fa0 pid: 27 (default app) web_1 | WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7ff88b322fa0 pid: 25 (default app) web_1 | WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7ff88b322fa0 pid: 26 (default app) zookeeper_1 | ===> ENV Variables ... 
zookeeper_1 | ALLOW_UNSIGNED=false zookeeper_1 | COMPONENT=zookeeper zookeeper_1 | CONFLUENT_DEB_VERSION=1 zookeeper_1 | CONFLUENT_PLATFORM_LABEL= zookeeper_1 | CONFLUENT_SUPPORT_METRICS_ENABLE=false zookeeper_1 | CONFLUENT_VERSION=5.5.0 zookeeper_1 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar zookeeper_1 | HOME=/root zookeeper_1 | HOSTNAME=ce29841fa696 zookeeper_1 | KAFKA_OPTS=-Dzookeeper.4lw.commands.whitelist=ruok zookeeper_1 | KAFKA_VERSION= zookeeper_1 | LANG=C.UTF-8 zookeeper_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin zookeeper_1 | PWD=/ zookeeper_1 | PYTHON_PIP_VERSION=8.1.2 zookeeper_1 | PYTHON_VERSION=2.7.9-1 zookeeper_1 | SCALA_VERSION=2.12 zookeeper_1 | SHLVL=1 zookeeper_1 | ZOOKEEPER_CLIENT_PORT=2181 zookeeper_1 | ZOOKEEPER_LOG4J_ROOT_LOGLEVEL=WARN zookeeper_1 | ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL=WARN zookeeper_1 | ZULU_OPENJDK_VERSION=8=8.38.0.13 zookeeper_1 | _=/usr/bin/env zookeeper_1 | ===> User zookeeper_1 | uid=0(root) gid=0(root) groups=0(root) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... 
zookeeper_1 | [2021-06-18 03:25:43,985] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2021-06-18 03:25:44,932] WARN o.e.j.s.ServletContextHandler@4d95d2a2{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2021-06-18 03:25:44,933] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) worker_1 | 03:26:13 [INFO] sentry.plugins.github: apps-not-configured worker_1 | 03:26:13 [INFO] sentry.bgtasks: bgtask.spawn (task_name='sentry.bgtasks.clean_dsymcache:clean_dsymcache') worker_1 | 03:26:13 [INFO] sentry.bgtasks: bgtask.spawn (task_name='sentry.bgtasks.clean_releasefilecache:clean_releasefilecache') worker_1 | worker_1 | -------------- celery@27527180fc6d v4.4.7 (cliffs) worker_1 | --- ***** ----- worker_1 | -- ******* ---- Linux-3.10.0-123.el7.x86_64-x86_64-with-debian-10.9 2021-06-18 03:26:16 worker_1 | - *** --- * --- worker_1 | - ** ---------- [config] worker_1 | - ** ---------- .> app: sentry:0x7f7bb3fdf4e0 worker_1 | - ** ---------- .> transport: redis://redis:6379/0 worker_1 | - ** ---------- .> results: disabled:// worker_1 | - *** --- * --- .> concurrency: 4 (prefork) worker_1 | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) worker_1 | --- ***** ----- worker_1 | -------------- [queues] worker_1 | .> activity.notify exchange=default(direct) key=activity.notify worker_1 | .> alerts exchange=default(direct) key=alerts worker_1 | .> app_platform exchange=default(direct) key=app_platform worker_1 | .> assemble exchange=default(direct) key=assemble worker_1 | .> auth exchange=default(direct) key=auth worker_1 | .> buffers.process_pending exchange=default(direct) key=buffers.process_pending worker_1 | .> cleanup exchange=default(direct) key=cleanup worker_1 | .> commits exchange=default(direct) key=commits worker_1 | .> counters-0 exchange=counters(direct) 
key=default worker_1 | .> data_export exchange=default(direct) key=data_export worker_1 | .> default exchange=default(direct) key=default worker_1 | .> digests.delivery exchange=default(direct) key=digests.delivery worker_1 | .> digests.scheduling exchange=default(direct) key=digests.scheduling worker_1 | .> email exchange=default(direct) key=email worker_1 | .> events.preprocess_event exchange=default(direct) key=events.preprocess_event worker_1 | .> events.process_event exchange=default(direct) key=events.process_event worker_1 | .> events.reprocess_events exchange=default(direct) key=events.reprocess_events worker_1 | .> events.reprocessing.preprocess_event exchange=default(direct) key=events.reprocessing.preprocess_event worker_1 | .> events.reprocessing.process_event exchange=default(direct) key=events.reprocessing.process_event worker_1 | .> events.reprocessing.symbolicate_event exchange=default(direct) key=events.reprocessing.symbolicate_event worker_1 | .> events.save_event exchange=default(direct) key=events.save_event worker_1 | .> events.symbolicate_event exchange=default(direct) key=events.symbolicate_event worker_1 | .> files.delete exchange=default(direct) key=files.delete worker_1 | .> group_owners.process_suspect_commits exchange=default(direct) key=group_owners.process_suspect_commits worker_1 | .> incident_snapshots exchange=default(direct) key=incident_snapshots worker_1 | .> incidents exchange=default(direct) key=incidents worker_1 | .> integrations exchange=default(direct) key=integrations worker_1 | .> merge exchange=default(direct) key=merge worker_1 | .> options exchange=default(direct) key=options worker_1 | .> relay_config exchange=default(direct) key=relay_config worker_1 | .> reports.deliver exchange=default(direct) key=reports.deliver worker_1 | .> reports.prepare exchange=default(direct) key=reports.prepare worker_1 | .> search exchange=default(direct) key=search worker_1 | .> sleep exchange=default(direct) key=sleep worker_1 | .> 
stats exchange=default(direct) key=stats worker_1 | .> subscriptions exchange=default(direct) key=subscriptions worker_1 | .> triggers-0 exchange=triggers(direct) key=default worker_1 | .> unmerge exchange=default(direct) key=unmerge worker_1 | .> update exchange=default(direct) key=update worker_1 | worker_1 | 03:31:15 [WARNING] sentry.tasks.release_registry: Release registry URL is not specified, skipping the task. worker_1 | 03:36:15 [WARNING] sentry.tasks.release_registry: Release registry URL is not specified, skipping the task. worker_1 | 03:41:15 [WARNING] sentry.tasks.release_registry: Release registry URL is not specified, skipping the task. worker_1 | 03:41:15 [INFO] sentry.tasks.update_user_reports: update_user_reports.records_updated (reports_to_update=0 reports_with_event=0 updated_reports=0)
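Worth noting about the logs above: the rdkafka "Connection refused" and COORDINATOR_LOAD_IN_PROGRESS errors all appear during startup, before the broker finished booting, and every consumer later logs "New partitions assigned", so Kafka itself looks healthy here. A quick way to double-check from the host is to ask the broker to list its topics (a sketch, assuming the default `kafka` service name from this repo's docker-compose.yml):

```shell
# Check that the Kafka broker answers once the stack is up. Skips gracefully
# when docker-compose is not on PATH (e.g. copy-pasted onto another machine).
# -T disables TTY allocation so this also works from scripts/cron.
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose exec -T kafka kafka-topics \
        --bootstrap-server localhost:9092 --list || echo "broker not reachable yet"
else
    echo "docker-compose not found; run this on the Sentry host"
fi
```

A non-empty topic list (events, ingest-sessions, outcomes, ...) means the startup errors were transient and can be ignored.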
Most of the time you cannot connect back to your public IP without some port forwarding. Have you tried just using http://127.0.0.1:9000/ instead?
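To separate the two cases (Sentry not serving vs. the port being blocked from outside), a local probe on the host is a quick first step. A sketch, assuming the default port 9000; the firewalld commands in the comment are only relevant on CentOS 7 if the local probe succeeds but the public IP does not:

```shell
# Probe the Sentry web port locally; prints the HTTP status code,
# or "unreachable" if nothing is listening on 9000.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://127.0.0.1:9000/) || status=unreachable
echo "local:9000 -> $status"

# If the probe works locally but http://<public-ip>:9000 does not, the
# CentOS 7 firewall is the usual culprit; opening the port would look like:
#   sudo firewall-cmd --permanent --add-port=9000/tcp && sudo firewall-cmd --reload
```

If the local probe already fails, the problem is inside the compose stack (check `docker-compose ps` and the `web`/`nginx` logs), not the network.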