torodb / stampede

The ToroDB solution to provide better analytics on top of MongoDB and make it easier to migrate from MongoDB to SQL
https://www.torodb.com/stampede/
GNU Affero General Public License v3.0

Docker-Compose ExecutorService java.util.concurrent.ScheduledThreadPoolExecutor@2eac3d64[Shutting down #202

Closed · xet7 closed this issue 7 years ago

xet7 commented 7 years ago

Hi, I'm using the Docker Compose script from: https://github.com/wekan/wekan-postgresql

The Dockerfile for that Wekan container is at: https://github.com/wekan/wekan/blob/devel/Dockerfile

The only change to docker-compose.yml is that the Wekan address is changed from http://localhost to an IP address like http://192.168.1.5.
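
In other words, the only edited part of the file is roughly the Wekan service's ROOT_URL (the IP address here is just an example; the full files are further down in this thread):

  wekan:
    environment:
      - ROOT_URL=http://192.168.1.5   # was http://localhost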

I started the stack from that docker-compose.yml with:

docker-compose up

The ToroDB Stampede Docker container does not stay running; it crashes and exits. The other containers (wekan, mongo and postgres) keep running. It's not a lack of RAM: I tested on a computer with 32 GB of RAM in total, more than half of it free before starting.

There were no errors about fibers when building the Docker Hub wekanteam/wekan:latest container, so I don't know why the fibers error appears below.

$ docker-compose up
Creating network "wekanpostgresql_wekan-tier" with driver "bridge"
Creating volume "wekanpostgresql_mongodb" with local driver
Creating volume "wekanpostgresql_mongodb-dump" with local driver
Pulling mongodb (mongo:3.2)...
3.2: Pulling from library/mongo
5233d9aed181: Pull complete
5bbfc055e8fb: Pull complete
03e4cc4b6057: Pull complete
8319d631fd37: Pull complete
797ca64b920a: Pull complete
4f57a996ba49: Pull complete
5778b19a1103: Pull complete
a763733f623a: Pull complete
0101d9086c98: Pull complete
8b0a7b12275b: Pull complete
bfe8dd06ccf2: Pull complete
Digest: sha256:b2c7025b69223fca43a2c7d60c30b2bffac4df20314f11d2b46f4d8d4eaf29e9
Status: Downloaded newer image for mongo:3.2
Pulling wekan (wekanteam/wekan:latest)...
latest: Pulling from wekanteam/wekan
9f0706ba7422: Pull complete
027fea8c066a: Pull complete
c261542db470: Pull complete
Digest: sha256:ead2dba16e0b80b4ff570d28de2d966be2508386f7b4632b8928de6ee401abb4
Status: Downloaded newer image for wekanteam/wekan:latest
Pulling postgres (postgres:9.6)...
9.6: Pulling from library/postgres
ad74af05f5a2: Pull complete
8996b4a29b2b: Pull complete
bea3311ef15b: Pull complete
b1b9eb0ac9c8: Pull complete
1d6d551d6af0: Pull complete
ba16377760f9: Pull complete
fd68bfa82d98: Pull complete
f49f2decd34d: Pull complete
6b1468749943: Pull complete
29d82d6e2d6c: Pull complete
ad849322ee0c: Pull complete
c5539863a39f: Pull complete
18cc2b50256c: Pull complete
Digest: sha256:586320aba4a40f7c4ffdb69534f93c844f01c0ff1211c4b9d9f05a8bddca186f
Status: Downloaded newer image for postgres:9.6
Pulling torodb-stampede (torodb/stampede:latest)...
latest: Pulling from torodb/stampede
5040bd298390: Pull complete
fce5728aad85: Pull complete
c42794440453: Pull complete
0c0da797ba48: Pull complete
7c9b17433752: Pull complete
114e02586e63: Pull complete
e4c663802e9a: Pull complete
0490ebe4175e: Pull complete
44f1d76d0958: Pull complete
ab29f21dee7e: Pull complete
c91455792d73: Pull complete
Digest: sha256:ff04de456602ecd01347bb836565da01e03d4f30d62078286951cea8242667ed
Status: Downloaded newer image for torodb/stampede:latest
Creating wekanpostgresql_mongodb_1 ... 
Creating wekanpostgresql_postgres_1 ... 
Creating wekanpostgresql_mongodb_1
Creating wekanpostgresql_mongodb_1 ... done
Creating wekan-app ... 
Creating wekanpostgresql_torodb-stampede_1 ... 
Creating wekan-app
Creating wekanpostgresql_torodb-stampede_1 ... done
Attaching to wekanpostgresql_postgres_1, wekanpostgresql_mongodb_1, wekan-app, wekanpostgresql_torodb-stampede_1
postgres_1         | The files belonging to this database system will be owned by user "postgres".
postgres_1         | This user must also own the server process.
postgres_1         | 
postgres_1         | The database cluster will be initialized with locale "en_US.utf8".
postgres_1         | The default database encoding has accordingly been set to "UTF8".
postgres_1         | The default text search configuration will be set to "english".
postgres_1         | 
postgres_1         | Data page checksums are disabled.
postgres_1         | 
postgres_1         | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1         | creating subdirectories ... ok
postgres_1         | selecting default max_connections ... 100
postgres_1         | selecting default shared_buffers ... 128MB
postgres_1         | selecting dynamic shared memory implementation ... posix
postgres_1         | creating configuration files ... ok
mongodb_1          | 2017-08-16T22:50:23.726+0000 I CONTROL  [initandlisten] MongoDB starting : pid=6 port=27017 dbpath=/data/db 64-bit host=583fc0025955
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] db version v3.2.16
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] git version: 056bf45128114e44c5358c7a8776fb582363e094
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] modules: none
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] build environment:
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten]     distmod: debian81
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1          | 2017-08-16T22:50:23.728+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1          | 2017-08-16T22:50:23.728+0000 I CONTROL  [initandlisten] options: { replication: { replSet: "rs1" } }
mongodb_1          | 2017-08-16T22:50:23.872+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] 
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] 
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] 
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] 
postgres_1         | running bootstrap script ... ok
mongodb_1          | 2017-08-16T22:50:24.955+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
mongodb_1          | 2017-08-16T22:50:24.955+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
mongodb_1          | 2017-08-16T22:50:24.955+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1          | 2017-08-16T22:50:24.956+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1          | 2017-08-16T22:50:24.956+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
postgres_1         | performing post-bootstrap initialization ... ok
postgres_1         | syncing data to disk ... ok
postgres_1         | 
postgres_1         | Success. You can now start the database server using:
postgres_1         | 
postgres_1         |     pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1         | 
postgres_1         | 
postgres_1         | WARNING: enabling "trust" authentication for local connections
postgres_1         | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1         | --auth-local and --auth-host, the next time you run initdb.
postgres_1         | ****************************************************
postgres_1         | WARNING: No password has been set for the database.
postgres_1         |          This will allow anyone with access to the
postgres_1         |          Postgres port to access your database. In
postgres_1         |          Docker's default configuration, this is
postgres_1         |          effectively any other container on the same
postgres_1         |          system.
postgres_1         | 
postgres_1         |          Use "-e POSTGRES_PASSWORD=password" to set
postgres_1         |          it in "docker run".
postgres_1         | ****************************************************
postgres_1         | waiting for server to start....LOG:  could not bind IPv6 socket: Cannot assign requested address
postgres_1         | HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
postgres_1         | LOG:  database system was shut down at 2017-08-16 22:50:26 UTC
postgres_1         | LOG:  MultiXact member wraparound protections are now enabled
postgres_1         | LOG:  database system is ready to accept connections
postgres_1         | LOG:  autovacuum launcher started
mongodb_1          | 2017-08-16T22:50:27.308+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.2:51956 #1 (1 connection now open)
mongodb_1          | 2017-08-16T22:50:27.480+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55280 #2 (2 connections now open)
wekan-app          | 
wekan-app          | /build/programs/server/node_modules/fibers/future.js:313
wekan-app          |                        throw(ex);
wekan-app          |                        ^
wekan-app          | MongoError: not master and slaveOk=false
wekan-app          |     at Object.Future.wait (/build/programs/server/node_modules/fibers/future.js:449:15)
wekan-app          |     at [object Object].MongoConnection._ensureIndex (packages/mongo/mongo_driver.js:832:10)
wekan-app          |     at [object Object].Mongo.Collection._ensureIndex (packages/mongo/collection.js:677:20)
wekan-app          |     at setupUsersCollection (packages/accounts-base/accounts_server.js:1493:9)
wekan-app          |     at new AccountsServer (packages/accounts-base/accounts_server.js:51:5)
wekan-app          |     at meteorInstall.node_modules.meteor.accounts-base.server_main.js (packages/accounts-base/server_main.js:9:12)
wekan-app          |     at fileEvaluate (packages/modules-runtime.js:197:9)
wekan-app          |     at require (packages/modules-runtime.js:120:16)
wekan-app          |     at /build/programs/server/packages/accounts-base.js:2031:15
wekan-app          |     at /build/programs/server/packages/accounts-base.js:2042:3
wekan-app          |     - - - - -
wekan-app          |     at Function.MongoError.create (/build/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/error.js:31:11)
wekan-app          |     at queryCallback (/build/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/cursor.js:212:36)
wekan-app          |     at /build/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:455:18
wekan-app          |     at nextTickCallbackWith0Args (node.js:489:9)
wekan-app          |     at process._tickCallback (node.js:418:13)
mongodb_1          | 2017-08-16T22:50:27.898+0000 I NETWORK  [conn2] end connection 172.18.0.4:55280 (1 connection now open)
postgres_1         |  done
postgres_1         | server started
postgres_1         | ALTER ROLE
postgres_1         | 
postgres_1         | 
postgres_1         | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres_1         | 
postgres_1         | LOG:  received fast shutdown request
postgres_1         | LOG:  aborting any active transactions
postgres_1         | LOG:  autovacuum launcher shutting down
postgres_1         | LOG:  shutting down
postgres_1         | waiting for server to shut down....LOG:  database system is shut down
postgres_1         |  done
postgres_1         | server stopped
postgres_1         | 
postgres_1         | PostgreSQL init process complete; ready for start up.
postgres_1         | 
postgres_1         | LOG:  database system was shut down at 2017-08-16 22:50:28 UTC
postgres_1         | LOG:  MultiXact member wraparound protections are now enabled
postgres_1         | LOG:  database system is ready to accept connections
postgres_1         | LOG:  autovacuum launcher started
postgres_1         | FATAL:  role "wekan" does not exist
mongodb_1          | 2017-08-16T22:50:30.312+0000 I REPL     [conn1] replSetInitiate admin command received from client
mongodb_1          | 2017-08-16T22:50:30.314+0000 I REPL     [conn1] replSetInitiate config object with 1 members parses ok
mongodb_1          | 2017-08-16T22:50:30.314+0000 I REPL     [conn1] ******
mongodb_1          | 2017-08-16T22:50:30.314+0000 I REPL     [conn1] creating replication oplog of size: 22945MB...
mongodb_1          | 2017-08-16T22:50:30.321+0000 I STORAGE  [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
mongodb_1          | 2017-08-16T22:50:30.328+0000 I STORAGE  [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
mongodb_1          | 2017-08-16T22:50:30.328+0000 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for truncation
mongodb_1          | 2017-08-16T22:50:30.363+0000 I REPL     [conn1] ******
mongodb_1          | 2017-08-16T22:50:30.427+0000 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rs1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "mongodb:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5994cc3613c864f461dfa9fb') } }
mongodb_1          | 2017-08-16T22:50:30.432+0000 I REPL     [ReplicationExecutor] This node is mongodb:27017 in the config
mongodb_1          | 2017-08-16T22:50:30.432+0000 I REPL     [ReplicationExecutor] transition to STARTUP2
mongodb_1          | 2017-08-16T22:50:30.432+0000 I REPL     [conn1] Starting replication applier threads
mongodb_1          | 2017-08-16T22:50:30.433+0000 I COMMAND  [conn1] command local.replset.minvalid command: replSetInitiate { replSetInitiate: { _id: "rs1", members: [ { _id: 0.0, host: "mongodb:27017" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{ Global: { acquireCount: { r: 7, w: 5, W: 2 } }, Database: { acquireCount: { w: 2, W: 3 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 120ms
mongodb_1          | 2017-08-16T22:50:30.482+0000 I NETWORK  [conn1] end connection 172.18.0.2:51956 (0 connections now open)
mongodb_1          | 2017-08-16T22:50:30.482+0000 I REPL     [ReplicationExecutor] transition to RECOVERING
mongodb_1          | 2017-08-16T22:50:30.483+0000 I REPL     [ReplicationExecutor] transition to SECONDARY
mongodb_1          | 2017-08-16T22:50:30.502+0000 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
mongodb_1          | 2017-08-16T22:50:30.503+0000 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
mongodb_1          | 2017-08-16T22:50:30.598+0000 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
mongodb_1          | 2017-08-16T22:50:30.598+0000 I REPL     [ReplicationExecutor] transition to PRIMARY
mongodb_1          | 2017-08-16T22:50:31.492+0000 I REPL     [rsSync] transition to primary complete; database writes are now permitted
mongodb_1          | 2017-08-16T22:50:32.717+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55286 #3 (1 connection now open)
mongodb_1          | 2017-08-16T22:50:32.907+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { username: 1 }, name: "username_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.907+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.913+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:32.938+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { emails.address: 1 }, name: "emails.address_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.939+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.940+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:32.966+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.resume.loginTokens.hashedToken: 1 }, name: "services.resume.loginTokens.hashedToken_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.967+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.973+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:32.995+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.resume.loginTokens.token: 1 }, name: "services.resume.loginTokens.token_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.995+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.996+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:33.095+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, key: { services.resume.haveLoginTokensToDelete: 1 }, name: "services.resume.haveLoginTokensToDelete_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:33.098+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:33.102+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:33.121+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, key: { services.resume.loginTokens.when: 1 }, name: "services.resume.loginTokens.when_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:33.126+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:33.142+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.821+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.email.verificationTokens.token: 1 }, name: "services.email.verificationTokens.token_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:34.822+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.832+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.856+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.password.reset.token: 1 }, name: "services.password.reset.token_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:34.856+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.858+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.885+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, key: { services.password.reset.when: 1 }, name: "services.password.reset.when_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:34.885+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.886+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.942+0000 I INDEX    [conn3] build index on: wekan.meteor_accounts_loginServiceConfiguration properties: { v: 1, unique: true, key: { service: 1 }, name: "service_1", ns: "wekan.meteor_accounts_loginServiceConfiguration" }
mongodb_1          | 2017-08-16T22:50:34.942+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.944+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:37.743+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55288 #4 (2 connections now open)
mongodb_1          | 2017-08-16T22:50:37.814+0000 I NETWORK  [conn4] end connection 172.18.0.4:55288 (1 connection now open)
mongodb_1          | 2017-08-16T22:50:37.881+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55290 #5 (2 connections now open)
mongodb_1          | 2017-08-16T22:50:37.882+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55292 #6 (3 connections now open)
mongodb_1          | 2017-08-16T22:50:37.923+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55294 #7 (4 connections now open)
mongodb_1          | 2017-08-16T22:50:37.980+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55296 #8 (5 connections now open)
mongodb_1          | 2017-08-16T22:50:37.985+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55298 #9 (6 connections now open)
mongodb_1          | 2017-08-16T22:50:38.006+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55300 #10 (7 connections now open)
mongodb_1          | 2017-08-16T22:50:38.098+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55302 #11 (8 connections now open)
mongodb_1          | 2017-08-16T22:50:38.098+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55304 #12 (9 connections now open)
mongodb_1          | 2017-08-16T22:50:38.152+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55306 #13 (10 connections now open)
mongodb_1          | 2017-08-16T22:50:38.168+0000 I NETWORK  [conn13] end connection 172.18.0.4:55306 (9 connections now open)
mongodb_1          | 2017-08-16T22:50:38.192+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55308 #14 (10 connections now open)
mongodb_1          | 2017-08-16T22:50:38.195+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55310 #15 (11 connections now open)
mongodb_1          | 2017-08-16T22:50:38.203+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55312 #16 (12 connections now open)
mongodb_1          | 2017-08-16T22:50:38.226+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55314 #17 (13 connections now open)
mongodb_1          | 2017-08-16T22:50:38.227+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55316 #18 (14 connections now open)
mongodb_1          | 2017-08-16T22:50:38.282+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55318 #19 (15 connections now open)
mongodb_1          | 2017-08-16T22:50:38.283+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55320 #20 (16 connections now open)
mongodb_1          | 2017-08-16T22:50:38.977+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55322 #21 (17 connections now open)
mongodb_1          | 2017-08-16T22:50:39.309+0000 I INDEX    [conn3] build index on: wekan.activities properties: { v: 1, key: { createdAt: -1 }, name: "createdAt_-1", ns: "wekan.activities" }
mongodb_1          | 2017-08-16T22:50:39.309+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.311+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.327+0000 I INDEX    [conn21] build index on: wekan.activities properties: { v: 1, key: { cardId: 1, createdAt: -1 }, name: "cardId_1_createdAt_-1", ns: "wekan.activities" }
mongodb_1          | 2017-08-16T22:50:39.328+0000 I INDEX    [conn21]    building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.329+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.352+0000 I INDEX    [conn5] build index on: wekan.activities properties: { v: 1, key: { boardId: 1, createdAt: -1 }, name: "boardId_1_createdAt_-1", ns: "wekan.activities" }
mongodb_1          | 2017-08-16T22:50:39.352+0000 I INDEX    [conn5]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.362+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.387+0000 I INDEX    [conn3] build index on: wekan.activities properties: { v: 1, key: { commentId: 1 }, name: "commentId_1", ns: "wekan.activities", partialFilterExpression: { commentId: { $exists: true } } }
mongodb_1          | 2017-08-16T22:50:39.388+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.394+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.415+0000 I INDEX    [conn21] build index on: wekan.activities properties: { v: 1, key: { attachmentId: 1 }, name: "attachmentId_1", ns: "wekan.activities", partialFilterExpression: { attachmentId: { $exists: true } } }
mongodb_1          | 2017-08-16T22:50:39.415+0000 I INDEX    [conn21]    building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.417+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.496+0000 I INDEX    [conn5] build index on: wekan.boards properties: { v: 1, unique: true, key: { _id: 1, members.userId: 1 }, name: "_id_1_members.userId_1", ns: "wekan.boards" }
mongodb_1          | 2017-08-16T22:50:39.496+0000 I INDEX    [conn5]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.498+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.525+0000 I INDEX    [conn3] build index on: wekan.boards properties: { v: 1, key: { members.userId: 1 }, name: "members.userId_1", ns: "wekan.boards" }
mongodb_1          | 2017-08-16T22:50:39.525+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.527+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.562+0000 I INDEX    [conn21] build index on: wekan.card_comments properties: { v: 1, key: { cardId: 1, createdAt: -1 }, name: "cardId_1_createdAt_-1", ns: "wekan.card_comments" }
mongodb_1          | 2017-08-16T22:50:39.565+0000 I INDEX    [conn21]    building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.567+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.610+0000 I INDEX    [conn5] build index on: wekan.cards properties: { v: 1, key: { boardId: 1, createdAt: -1 }, name: "boardId_1_createdAt_-1", ns: "wekan.cards" }
mongodb_1          | 2017-08-16T22:50:39.611+0000 I INDEX    [conn5]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.615+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.668+0000 I INDEX    [conn3] build index on: wekan.checklists properties: { v: 1, key: { cardId: 1, createdAt: 1 }, name: "cardId_1_createdAt_1", ns: "wekan.checklists" }
mongodb_1          | 2017-08-16T22:50:39.668+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.669+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.709+0000 I INDEX    [conn21] build index on: wekan.invitation_codes properties: { v: 1, unique: true, key: { email: 1 }, name: "c2_email", ns: "wekan.invitation_codes", background: true, sparse: false }
mongodb_1          | 2017-08-16T22:50:39.709+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.763+0000 I INDEX    [conn5] build index on: wekan.lists properties: { v: 1, key: { boardId: 1 }, name: "boardId_1", ns: "wekan.lists" }
mongodb_1          | 2017-08-16T22:50:39.764+0000 I INDEX    [conn5]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.766+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.893+0000 I INDEX    [conn3] build index on: wekan.unsaved-edits properties: { v: 1, key: { userId: 1 }, name: "userId_1", ns: "wekan.unsaved-edits" }
mongodb_1          | 2017-08-16T22:50:39.893+0000 I INDEX    [conn3]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.894+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
torodb-stampede_1  | Creating entry for user wekan in /root/.toropass
torodb-stampede_1  | Creating wekan user
torodb-stampede_1  | CREATE ROLE
torodb-stampede_1  | Creating wekan database
torodb-stampede_1  | CREATE DATABASE
torodb-stampede_1  | Writing configuration file to /maven/conf/torodb-stampede.yml
torodb-stampede_1  | 2017-08-16T10:50:51.452 INFO  LIFECYCLE  - Starting up ToroDB Stampede
torodb-stampede_1  | 2017-08-16T10:50:52.129 INFO  BACKEND    - Configured PostgreSQL backend at postgres:5432
torodb-stampede_1  | 2017-08-16T10:50:54.192 INFO  BACKEND    - Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
torodb-stampede_1  | 2017-08-16T10:50:54.348 INFO  BACKEND    - Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
torodb-stampede_1  | 2017-08-16T10:50:54.394 INFO  BACKEND    - Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
torodb-stampede_1  | 2017-08-16T10:50:57.585 INFO  BACKEND    - Schema 'torodb' not found. Creating it...
torodb-stampede_1  | 2017-08-16T10:50:57.784 INFO  BACKEND    - Schema 'torodb' created
torodb-stampede_1  | 2017-08-16T10:50:57.821 INFO  BACKEND    - Database metadata has been validated
torodb-stampede_1  | 2017-08-16T10:50:58.549 WARN  LIFECYCLE  - Found that replication shard unsharded is not consistent.
torodb-stampede_1  | 2017-08-16T10:50:58.549 WARN  LIFECYCLE  - Dropping user data.
torodb-stampede_1  | 2017-08-16T10:50:58.673 INFO  REPL-unsha - Consistent state has been set to 'false'
torodb-stampede_1  | 2017-08-16T10:50:59.196 INFO  LIFECYCLE  - Starting replication from replica set named rs1
torodb-stampede_1  | 2017-08-16T10:51:00.926 INFO  REPL       - Starting replication service
mongodb_1          | 2017-08-16T22:51:01.675+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.5:36708 #22 (18 connections now open)
mongodb_1          | 2017-08-16T22:51:01.682+0000 I NETWORK  [conn22] end connection 172.18.0.5:36708 (17 connections now open)
mongodb_1          | 2017-08-16T22:51:01.930+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.5:36710 #23 (18 connections now open)
mongodb_1          | 2017-08-16T22:51:02.097+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.5:36712 #24 (19 connections now open)
torodb-stampede_1  | 2017-08-16T10:51:02.334 INFO  REPL       - Waiting for 2  pings from other members before syncing
torodb-stampede_1  | 2017-08-16T10:51:02.344 INFO  REPL       - Member mongodb:27017 is now in state RS_PRIMARY
torodb-stampede_1  | 2017-08-16T10:51:03.340 INFO  REPL       - Waiting for 1  pings from other members before syncing
torodb-stampede_1  | 2017-08-16T10:51:04.341 INFO  REPL       - Waiting for 1  pings from other members before syncing
torodb-stampede_1  | 2017-08-16T10:51:05.352 INFO  REPL       - syncing from: mongodb:27017
torodb-stampede_1  | 2017-08-16T10:51:05.353 INFO  REPL       - Topology service started
torodb-stampede_1  | 2017-08-16T10:51:05.543 INFO  REPL       - Database is not consistent.
torodb-stampede_1  | 2017-08-16T10:51:05.545 INFO  REPL       - Replication service started
torodb-stampede_1  | 2017-08-16T10:51:05.547 INFO  LIFECYCLE  - ToroDB Stampede is now running
torodb-stampede_1  | 2017-08-16T10:51:05.552 INFO  REPL       - Starting RECOVERY mode
torodb-stampede_1  | 2017-08-16T10:51:05.562 INFO  REPL       - Starting RECOVERY service
torodb-stampede_1  | 2017-08-16T10:51:05.564 INFO  REPL       - Starting initial sync
torodb-stampede_1  | 2017-08-16T10:51:05.607 INFO  REPL       - Consistent state has been set to 'false'
torodb-stampede_1  | 2017-08-16T10:51:05.645 INFO  REPL       - Using node mongodb:27017 to replicate from
torodb-stampede_1  | 2017-08-16T10:51:05.733 INFO  REPL       - Remote database cloning started
torodb-stampede_1  | 2017-08-16T10:51:06.200 INFO  BACKEND    - Created internal index rid_pkey for table oplog_replication_lastappliedoplogentry
torodb-stampede_1  | 2017-08-16T10:51:06.210 INFO  BACKEND    - Created internal index did_seq_idx for table oplog_replication_lastappliedoplogentry
torodb-stampede_1  | 2017-08-16T10:51:06.315 INFO  REPL       - Local databases dropping started
torodb-stampede_1  | 2017-08-16T10:51:06.396 INFO  REPL       - Local databases dropping finished
torodb-stampede_1  | 2017-08-16T10:51:06.397 INFO  REPL       - Remote database cloning started
torodb-stampede_1  | 2017-08-16T10:51:06.429 INFO  REPL       - Collection wekan.card_comments will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.430 INFO  REPL       - Collection wekan.invitation_codes will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.452 INFO  REPL       - Collection wekan.checklists will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.459 INFO  REPL       - Collection wekan.cards will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.460 INFO  REPL       - Collection wekan.lists will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.460 INFO  REPL       - Collection wekan.meteor-migrations will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.461 INFO  REPL       - Collection wekan.boards will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.461 INFO  REPL       - Collection wekan.users will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.462 INFO  REPL       - Collection wekan.meteor_accounts_loginServiceConfiguration will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.accountSettings will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.activities will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.settings will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.unsaved-edits will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.476 INFO  MONGOD     - Drop collection wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:06.540 INFO  MONGOD     - Drop collection wekan.invitation_codes
torodb-stampede_1  | 2017-08-16T10:51:06.587 INFO  MONGOD     - Drop collection wekan.checklists
torodb-stampede_1  | 2017-08-16T10:51:06.641 INFO  MONGOD     - Drop collection wekan.cards
torodb-stampede_1  | 2017-08-16T10:51:06.692 INFO  MONGOD     - Drop collection wekan.lists
torodb-stampede_1  | 2017-08-16T10:51:06.754 INFO  MONGOD     - Drop collection wekan.meteor-migrations
torodb-stampede_1  | 2017-08-16T10:51:06.778 INFO  MONGOD     - Drop collection wekan.boards
torodb-stampede_1  | 2017-08-16T10:51:06.824 INFO  MONGOD     - Drop collection wekan.users
torodb-stampede_1  | 2017-08-16T10:51:06.906 INFO  MONGOD     - Drop collection wekan.meteor_accounts_loginServiceConfiguration
torodb-stampede_1  | 2017-08-16T10:51:06.935 INFO  MONGOD     - Drop collection wekan.accountSettings
torodb-stampede_1  | 2017-08-16T10:51:06.977 INFO  MONGOD     - Drop collection wekan.activities
torodb-stampede_1  | 2017-08-16T10:51:07.038 INFO  MONGOD     - Drop collection wekan.settings
torodb-stampede_1  | 2017-08-16T10:51:07.069 INFO  MONGOD     - Drop collection wekan.unsaved-edits
torodb-stampede_1  | 2017-08-16T10:51:07.108 INFO  REPL       - Cloning collection data wekan.card_comments into wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:07.471 INFO  REPL       - 0 documents have been cloned to wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:07.474 INFO  REPL       - Cloning collection data wekan.invitation_codes into wekan.invitation_codes
torodb-stampede_1  | 2017-08-16T10:51:07.499 INFO  REPL       - 0 documents have been cloned to wekan.invitation_codes
torodb-stampede_1  | 2017-08-16T10:51:07.502 INFO  REPL       - Cloning collection data wekan.checklists into wekan.checklists
torodb-stampede_1  | 2017-08-16T10:51:07.520 INFO  REPL       - 0 documents have been cloned to wekan.checklists
torodb-stampede_1  | 2017-08-16T10:51:07.524 INFO  REPL       - Cloning collection data wekan.cards into wekan.cards
torodb-stampede_1  | 2017-08-16T10:51:07.566 INFO  REPL       - 0 documents have been cloned to wekan.cards
torodb-stampede_1  | 2017-08-16T10:51:07.574 INFO  REPL       - Cloning collection data wekan.lists into wekan.lists
torodb-stampede_1  | 2017-08-16T10:51:07.621 INFO  REPL       - 0 documents have been cloned to wekan.lists
torodb-stampede_1  | 2017-08-16T10:51:07.623 INFO  REPL       - Cloning collection data wekan.meteor-migrations into wekan.meteor-migrations
torodb-stampede_1  | 2017-08-16T10:51:07.908 INFO  BACKEND    - Created index meteor_migrations__id_s_a_idx for table meteor_migrations associated to logical index wekan.meteor-migrations._id_
torodb-stampede_1  | 2017-08-16T10:51:07.992 INFO  REPL       - 7 documents have been cloned to wekan.meteor-migrations
torodb-stampede_1  | 2017-08-16T10:51:07.993 INFO  REPL       - Cloning collection data wekan.boards into wekan.boards
torodb-stampede_1  | 2017-08-16T10:51:08.045 INFO  REPL       - 0 documents have been cloned to wekan.boards
torodb-stampede_1  | 2017-08-16T10:51:08.046 INFO  REPL       - Cloning collection data wekan.users into wekan.users
torodb-stampede_1  | 2017-08-16T10:51:08.085 INFO  REPL       - 0 documents have been cloned to wekan.users
torodb-stampede_1  | 2017-08-16T10:51:08.086 INFO  REPL       - Cloning collection data wekan.meteor_accounts_loginServiceConfiguration into wekan.meteor_accounts_loginServiceConfiguration
torodb-stampede_1  | 2017-08-16T10:51:08.115 INFO  REPL       - 0 documents have been cloned to wekan.meteor_accounts_loginServiceConfiguration
torodb-stampede_1  | 2017-08-16T10:51:08.118 INFO  REPL       - Cloning collection data wekan.accountSettings into wekan.accountSettings
torodb-stampede_1  | 2017-08-16T10:51:08.239 INFO  BACKEND    - Created index accountsettings__id_s_a_idx for table accountsettings associated to logical index wekan.accountSettings._id_
torodb-stampede_1  | 2017-08-16T10:51:08.260 INFO  REPL       - 1 documents have been cloned to wekan.accountSettings
torodb-stampede_1  | 2017-08-16T10:51:08.261 INFO  REPL       - Cloning collection data wekan.activities into wekan.activities
torodb-stampede_1  | 2017-08-16T10:51:08.278 INFO  REPL       - 0 documents have been cloned to wekan.activities
torodb-stampede_1  | 2017-08-16T10:51:08.308 INFO  REPL       - Cloning collection data wekan.settings into wekan.settings
torodb-stampede_1  | 2017-08-16T10:51:08.403 INFO  BACKEND    - Created index settings__id_s_a_idx for table settings associated to logical index wekan.settings._id_
torodb-stampede_1  | 2017-08-16T10:51:08.448 INFO  BACKEND    - Created internal index rid_pkey for table settings_mailserver
torodb-stampede_1  | 2017-08-16T10:51:08.464 INFO  BACKEND    - Created internal index did_seq_idx for table settings_mailserver
torodb-stampede_1  | 2017-08-16T10:51:08.516 INFO  REPL       - 1 documents have been cloned to wekan.settings
torodb-stampede_1  | 2017-08-16T10:51:08.522 INFO  REPL       - Cloning collection data wekan.unsaved-edits into wekan.unsaved-edits
torodb-stampede_1  | 2017-08-16T10:51:08.543 INFO  REPL       - 0 documents have been cloned to wekan.unsaved-edits
torodb-stampede_1  | 2017-08-16T10:51:08.573 INFO  REPL       - Cloning collection indexes wekan.card_comments into wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:08.589 INFO  REPL       - Index card_comments.wekan._id_ will be cloned
torodb-stampede_1  | 2017-08-16T10:51:08.607 INFO  REPL       - Index card_comments.wekan.cardId_1_createdAt_-1 will be cloned
torodb-stampede_1  | 2017-08-16T10:51:08.656 ERROR REPL       - Fatal error while starting recovery mode: Error while cloning indexes: null
torodb-stampede_1  | 2017-08-16T10:51:08.673 ERROR REPL       - Catched an error on the replication layer. Escalating it
torodb-stampede_1  | 2017-08-16T10:51:08.674 ERROR LIFECYCLE  - Error reported by replication supervisor. Stopping ToroDB Stampede
torodb-stampede_1  | 2017-08-16T10:51:08.684 INFO  REPL       - Recived a request to stop the recovering service
torodb-stampede_1  | 2017-08-16T10:51:08.685 INFO  LIFECYCLE  - Shutting down ToroDB Stampede
torodb-stampede_1  | 2017-08-16T10:51:08.734 INFO  REPL       - Shutting down replication service
torodb-stampede_1  | 2017-08-16T10:51:09.082 INFO  REPL       - Topology service shutted down
torodb-stampede_1  | 2017-08-16T10:51:09.100 INFO  REPL       - Replication service shutted down
mongodb_1          | 2017-08-16T22:51:09.096+0000 I NETWORK  [conn24] end connection 172.18.0.5:36712 (18 connections now open)
mongodb_1          | 2017-08-16T22:51:09.098+0000 I NETWORK  [conn23] end connection 172.18.0.5:36710 (17 connections now open)
torodb-stampede_1  | 2017-08-16T10:51:10.124 INFO  LIFECYCLE  - ExecutorService java.util.concurrent.ScheduledThreadPoolExecutor@2eac3d64[Shutting down, pool size = 1, active threads = 0, queued tasks = 1, completed tasks = 15] did not finished in PT1.001S
torodb-stampede_1  | 2017-08-16T10:51:10.449 INFO  LIFECYCLE  - ToroDB Stampede has been shutted down
wekanpostgresql_torodb-stampede_1 exited with code 0

When I tried the Docker Hub wekanteam/wekan:latestdevel tag, it did complain that I should add a POSTGRES_PASSWORD environment variable. I added POSTGRES_PASSWORD=wekan to both the ToroDB and PostgreSQL containers, but I still got the same errors, just with a different hash after ScheduledThreadPoolExecutor and completed tasks = 13.

I have not yet tested whether this also happens with the ToroDB snap version. The Wekan snap is at https://github.com/wekan/wekan-snap, and the Wekan snap edge channel has the newest Wekan.

teoincontatto commented 7 years ago

Hi @xet7,

Can you provide the version of ToroDB Stampede listed in the docker-compose.yml? It would also be great if you could provide the docker-compose.yml content (perhaps with just the passwords removed) so we can test this on our side.

xet7 commented 7 years ago

@teoincontatto

Passwords are not secret, they are only used locally.

Older version, using the Wekan master branch; it complains about the PostgreSQL password:

version: '2'
services:
  torodb-stampede:
    image: torodb/stampede
    networks:
      - wekan-tier
    links:
      - postgres
      - mongodb
    environment:
      - POSTGRES_PASSWORD
      - TORODB_SETUP=true
      - TORODB_SYNC_SOURCE=mongodb:27017
      - TORODB_BACKEND_HOST=postgres
      - TORODB_BACKEND_PORT=5432
      - TORODB_BACKEND_DATABASE=wekan
      - TORODB_BACKEND_USER=wekan
      - TORODB_BACKEND_PASSWORD=wekan
      - DEBUG
  postgres:
    image: postgres:9.6
    networks:
      - wekan-tier
    environment:
      - POSTGRES_PASSWORD
    ports:
      - "15432:5432"
  mongodb:
    image: mongo:3.2
    networks:
      - wekan-tier
    ports:
      - "28017:27017"
    entrypoint:
      - /bin/bash
      - "-c"
      - mongo --nodb --eval '
            var db; 
            while (!db) { 
                try { 
                  db = new Mongo("mongodb:27017").getDB("local"); 
                } catch(ex) {} 
                sleep(3000); 
            }; 
            rs.initiate({_id:"rs1",members:[{_id:0,host:"mongodb:27017"}]});
        ' 1>/dev/null 2>&1 & 
        mongod --replSet rs1
  wekan:
    image: wekanteam/wekan:latest
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    ports:
      - 80:80
    environment:
      - MONGO_URL=mongodb://mongodb:27017/wekan
      - ROOT_URL=http://192.168.1.5
    depends_on:
      - mongodb

volumes:
  mongodb:
    driver: local
  mongodb-dump:
    driver: local

networks:
  wekan-tier:
    driver: bridge
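
(A note on the mongodb service in this file and the one below: its entrypoint backgrounds a mongo shell loop that retries every 3 seconds until mongod accepts connections and then runs rs.initiate for replica set rs1, while mongod --replSet rs1 runs in the foreground. That is what turns the single mongod into the rs1 replica set that Stampede replicates from.)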

Newer version, using the Wekan devel branch; it has the same error. I added the PostgreSQL password:

version: '2'
services:
  torodb-stampede:
    image: torodb/stampede
    networks:
      - wekan-tier
    links:
      - postgres
      - mongodb
    environment:
      - POSTGRES_PASSWORD=wekan
      - TORODB_SETUP=true
      - TORODB_SYNC_SOURCE=mongodb:27017
      - TORODB_BACKEND_HOST=postgres
      - TORODB_BACKEND_PORT=5432
      - TORODB_BACKEND_DATABASE=wekan
      - TORODB_BACKEND_USER=wekan
      - TORODB_BACKEND_PASSWORD=wekan
      - DEBUG
  postgres:
    image: postgres:9.6
    networks:
      - wekan-tier
    environment:
      - POSTGRES_PASSWORD=wekan
    ports:
      - "15432:5432"
  mongodb:
    image: mongo:3.2
    networks:
      - wekan-tier
    ports:
      - "28017:27017"
    entrypoint:
      - /bin/bash
      - "-c"
      - mongo --nodb --eval '
            var db; 
            while (!db) { 
                try { 
                  db = new Mongo("mongodb:27017").getDB("local"); 
                } catch(ex) {} 
                sleep(3000); 
            }; 
            rs.initiate({_id:"rs1",members:[{_id:0,host:"mongodb:27017"}]});
        ' 1>/dev/null 2>&1 & 
        mongod --replSet rs1
  wekan:
    image: wekanteam/wekan:latestdevel
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    ports:
      - 80:80
    environment:
      - MONGO_URL=mongodb://mongodb:27017/wekan
      - ROOT_URL=http://192.168.1.5
    depends_on:
      - mongodb

volumes:
  mongodb:
    driver: local
  mongodb-dump:
    driver: local

networks:
  wekan-tier:
    driver: bridge

teoincontatto commented 7 years ago

I can reproduce the bug. It seems it is fixed in development. The error is actually caused by an unsupported index (compound indexes are not supported at the moment) that is not filtered out as it should be during recovery:

torodb-stampede_1  | 2017-08-17T03:57:05.651 INFO  REPL       - Index card_comments.wekan.cardId_1_createdAt_-1 will be cloned
torodb-stampede_1  | 2017-08-17T03:57:05.666 ERROR REPL       - Fatal error while starting recovery mode: Error while cloning indexes: null
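
(That is the compound { cardId: 1, createdAt: -1 } index that Wekan builds on wekan.card_comments, visible in the mongodb log in the report above.)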

A workaround is to use the development version of ToroDB Stampede by changing the docker-compose.yml from:

...
  torodb-stampede:
    image: torodb/stampede
...

to:

...
  torodb-stampede:
    image: torodb/stampede:1.0.0-SNAPSHOT
...
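
(After that change, docker-compose pull torodb-stampede followed by a fresh docker-compose up should fetch and run the snapshot image.)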

Another workaround is to use the latest release version and configure a filter on the problematic indexes. Have a look at the documentation on how to achieve that:

https://www.torodb.com/stampede/docs/1.0.0-beta3/configuration/filtered-replication/
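
For illustration only, such a filter would go in the Stampede configuration (the torodb-stampede.yml that the container writes to /maven/conf/torodb-stampede.yml, per the log above). This is a minimal sketch assuming a replication.exclude structure nested as database -> collection -> index names; the exact key names and nesting are an assumption and should be verified against the linked page:

replication:
  replSetName: rs1
  syncSource: mongodb:27017
  exclude:
    wekan:                        # database (assumed nesting; verify against the docs)
      card_comments:              # collection with the unsupported compound index
        - cardId_1_createdAt_-1   # the index Stampede failed to clone

Other compound indexes from the mongodb log above (e.g. on wekan.activities and wekan.boards) would presumably need the same treatment.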

xet7 commented 7 years ago

Thanks! The easiest workaround for me is to use the development version in docker-compose.yml.