dwikyrestu closed this issue 3 years ago
Same problem here. In nginx error.log there is:
2021/07/02 09:26:32 [error] 483#483: *35 open() "/var/www/bigbluebutton-default/join" failed (2: No such file or directory), client: 212.110.203.23, server: xxxx.com, request: "GET /join HTTP/1.1", host: "xxx.xxx.com"
Same here, from time to time:
bbb-web:
/var/log/bigbluebutton/bbb-web.log:2021-07-28T04:00:03.107Z WARN i.l.c.protocol.ConnectionWatchdog - Cannot reconnect: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:6379
/var/log/bigbluebutton/bbb-web.log:2021-07-28T04:00:03.107Z WARN i.l.c.protocol.ConnectionWatchdog - Cannot reconnect: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:6379
/var/log/bigbluebutton/bbb-web.log:2021-07-28T04:00:03.107Z WARN i.l.c.protocol.ConnectionWatchdog - Cannot reconnect: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:6379
/var/log/bigbluebutton/bbb-web.log:2021-07-28T04:00:03.107Z WARN i.l.c.protocol.ConnectionWatchdog - Cannot reconnect: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:6379
2021-07-28T05:44:56.194Z WARN o.b.api.HTML5LoadBalancingService - Did not find any instances of html5 process running
2021-07-28T05:44:56.208Z DEBUG o.b.web.controllers.ApiController - Existing conference found
2021-07-28T05:44:56.208Z DEBUG o.b.web.controllers.ApiController - Rendering as xml
2021-07-28T05:44:56.384Z DEBUG o.b.web.controllers.ApiController - ApiController#join
nginx:
2021/07/28 08:20:20 [error] 56119#56119: *6622 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.16, server: bbb15.mydomain.tld, request: "GET /html5client/join?sessionToken=**anytoken** HTTP/1.1", upstream: "http://127.0.0.1:4105/html5client/join?sessionToken=**anytoken**", host: "bbb15.mydomain.tld", referrer: "https://bbb.mydomain.tld/"
2021/07/28 08:20:20 [error] 56119#56119: *6622 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.16, server: bbb15.mydomain.tld, request: "GET /html5client/join?sessionToken=**anytoken** HTTP/1.1", upstream: "http://127.0.0.1:4106/html5client/join?sessionToken=**anytoken**", host: "bbb15.mydomain.tld", referrer: "https://bbb.mydomain.tld/"
2021/07/28 08:20:20 [error] 56119#56119: *6622 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.16, server: bbb15.mydomain.tld, request: "GET /html5client/join?sessionToken=**anytoken** HTTP/1.1", upstream: "http://127.0.0.1:4107/html5client/join?sessionToken=**anytoken**", host: "bbb15.mydomain.tld", referrer: "https://bbb.mydomain.tld/"
2021/07/28 08:20:20 [error] 56119#56119: *6622 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.16, server: bbb15.mydomain.tld, request: "GET /html5client/join?sessionToken=**anytoken** HTTP/1.1", upstream: "http://127.0.0.1:4102/html5client/join?sessionToken=**anytoken**", host: "bbb15.mydomain.tld", referrer: "https://bbb.mydomain.tld/"
2021/07/28 08:20:20 [error] 56119#56119: *6622 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.16, server: bbb15.mydomain.tld, request: "GET /html5client/join?sessionToken=**anytoken** HTTP/1.1", upstream: "http://127.0.0.1:4103/html5client/join?sessionToken=**anytoken**", host: "bbb15.mydomain.tld", referrer: "https://bbb.mydomain.tld/"
2021/07/28 08:20:20 [error] 56119#56119: *6622 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.16, server: bbb15.mydomain.tld, request: "GET /html5client/join?sessionToken=**anytoken** HTTP/1.1", upstream: "http://127.0.0.1:4104/html5client/join?sessionToken=**anytoken**", host: "bbb15.mydomain.tld", referrer: "https://bbb.mydomain.tld/"
The referer is Greenlight in my setup.
It seems that the backend is offline. But why? Which logs should I look in?
BigBlueButton Server 2.3.8
Yes, the bbb-html5 service is down. Run
sudo systemctl status bbb-html5
sudo journalctl -u bbb-html5-*
to see the logs.
Likely each of the bbb-html5 frontend/backend instances crashes over time, probably memory related.
It would be useful if you could provide some of the journalctl logs; perhaps the problem could be identified. I have seen very few of these on the servers I have access to. Proper monitoring can help spot this and restart BBB at the first opportunity.
We are considering auto-restarting the service in the future, but we are not there yet. Still, with 2x frontends there is a decent chance of only minimal disruption to users if you are alerted about the service crash and isolate that server so no new meetings get started on it. Once it is empty, obtain the logs and restart BBB, unless there is a bigger issue (disk space, RAM, etc.).
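Until auto-restarting lands, the alert-and-restart idea above can be sketched as a small watchdog script. This is a hypothetical example of mine, not something shipped with BBB; the unit names match the instanced services shown in bbb-conf --status below.

```shell
#!/bin/sh
# Hypothetical watchdog sketch (not part of BBB): restart any
# bbb-html5 instance unit that systemd reports as failed.

# Returns success when the reported unit state warrants a restart.
needs_restart() {
    [ "$1" = "failed" ]
}

for unit in bbb-html5-backend@1 bbb-html5-backend@2 \
            bbb-html5-frontend@1 bbb-html5-frontend@2; do
    # "failed" / "active" / "inactive"; fall back to "unknown" if
    # systemctl is unavailable or errors out.
    state=$(systemctl is-active "$unit" 2>/dev/null || echo unknown)
    if needs_restart "$state"; then
        logger -t bbb-watchdog "$unit failed, restarting"
        systemctl restart "$unit"
    fi
done
```

A cron entry running this every minute would bound the outage to roughly a minute, though restarting blindly can mask the underlying crash, so keeping the journalctl logs first is still advisable.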
bbb-conf --status
nginx —————————————————► [✔ - active]
freeswitch ————————————► [✔ - active]
redis-server ——————————► [✔ - active]
bbb-apps-akka —————————► [✔ - active]
bbb-fsesl-akka ————————► [✔ - active]
mongod ————————————————► [✔ - active]
bbb-html5 —————————————► [✔ - active]
bbb-webrtc-sfu ————————► [✔ - active]
kurento-media-server ——► [✔ - active]
bbb-html5-backend@1 ———► [✘ - failed]
bbb-html5-backend@2 ———► [✘ - failed]
bbb-html5-frontend@1 ——► [✘ - failed]
bbb-html5-frontend@2 ——► [✘ - failed]
etherpad ——————————————► [✔ - active]
bbb-web ———————————————► [✔ - active]
We have a cronjob that restarts BBB at 04:00 every day; it seems that this restart is causing the problems:
Jul 29 03:58:57 bbb19 systemd_start_frontend.sh[1742]: info: Active connections
Jul 29 03:59:26 bbb19 systemd_start_frontend.sh[1751]: info: Active connections
Jul 29 03:59:27 bbb19 systemd_start_frontend.sh[1742]: info: Active connections
Jul 29 03:59:56 bbb19 systemd_start_frontend.sh[1751]: info: Active connections
Jul 29 03:59:57 bbb19 systemd_start_frontend.sh[1742]: info: Active connections
Jul 29 04:00:01 bbb19 systemd[1]: Stopping BigBlueButton HTML5 service, frontend instance 1...
Jul 29 04:00:01 bbb19 systemd[1]: Stopping BigBlueButton HTML5 service, backend instance 1...
Jul 29 04:00:01 bbb19 systemd[1]: Stopping BigBlueButton HTML5 service, frontend instance 2...
Jul 29 04:00:01 bbb19 systemd[1]: Stopping BigBlueButton HTML5 service, backend instance 2...
Jul 29 04:00:01 bbb19 systemd[1]: Stopped BigBlueButton HTML5 service, backend instance 1.
Jul 29 04:00:01 bbb19 systemd[1]: Stopped BigBlueButton HTML5 service, backend instance 2.
Jul 29 04:00:01 bbb19 systemd[1]: Stopped BigBlueButton HTML5 service, frontend instance 1.
Jul 29 04:00:01 bbb19 systemd[1]: Stopped BigBlueButton HTML5 service, frontend instance 2.
Jul 29 04:00:15 bbb19 systemd[1]: Started BigBlueButton HTML5 service, backend instance 1.
Jul 29 04:00:15 bbb19 systemd[1]: Started BigBlueButton HTML5 service, backend instance 2.
Jul 29 04:00:15 bbb19 systemd[1]: Started BigBlueButton HTML5 service, frontend instance 1.
Jul 29 04:00:15 bbb19 systemd[1]: Started BigBlueButton HTML5 service, frontend instance 2.
Jul 29 04:00:17 bbb19 systemd_start.sh[76327]: Starting mongoDB
Jul 29 04:00:17 bbb19 systemd_start.sh[76311]: Starting mongoDB
Jul 29 04:00:17 bbb19 systemd_start_frontend.sh[76341]: Starting mongoDB
Jul 29 04:00:17 bbb19 systemd_start_frontend.sh[76355]: Starting mongoDB
Jul 29 04:00:18 bbb19 systemd_start.sh[76327]: Mongo started
Jul 29 04:00:18 bbb19 systemd_start.sh[76327]: Initializing replicaset
Jul 29 04:00:18 bbb19 systemd_start.sh[76311]: Mongo started
Jul 29 04:00:18 bbb19 systemd_start.sh[76311]: Initializing replicaset
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76355]: Mongo started
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76355]: Initializing replicaset
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76341]: Mongo started
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76341]: Initializing replicaset
Jul 29 04:00:18 bbb19 systemd_start.sh[76311]: MongoDB shell version v4.2.15
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76341]: MongoDB shell version v4.2.15
Jul 29 04:00:18 bbb19 systemd_start.sh[76327]: MongoDB shell version v4.2.15
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76355]: MongoDB shell version v4.2.15
Jul 29 04:00:18 bbb19 systemd_start.sh[76311]: connecting to: mongodb://127.0.1.1:27017/test?compressors=disabled&gssapiServiceName=mongodb
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76341]: connecting to: mongodb://127.0.1.1:27017/test?compressors=disabled&gssapiServiceName=mongodb
Jul 29 04:00:18 bbb19 systemd_start.sh[76327]: connecting to: mongodb://127.0.1.1:27017/test?compressors=disabled&gssapiServiceName=mongodb
Jul 29 04:00:18 bbb19 systemd_start_frontend.sh[76355]: connecting to: mongodb://127.0.1.1:27017/test?compressors=disabled&gssapiServiceName=mongodb
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76341]: 2021-07-29T04:00:19.329+0000 E QUERY [js] Error: couldn't connect to server 127.0.1.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.1.1:27017 :: caused by :: Connection refused :
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76341]: connect@src/mongo/shell/mongo.js:353:17
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76341]: @(connect):2:6
Jul 29 04:00:19 bbb19 systemd_start.sh[76311]: 2021-07-29T04:00:19.329+0000 E QUERY [js] Error: couldn't connect to server 127.0.1.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.1.1:27017 :: caused by :: Connection refused :
Jul 29 04:00:19 bbb19 systemd_start.sh[76311]: connect@src/mongo/shell/mongo.js:353:17
Jul 29 04:00:19 bbb19 systemd_start.sh[76311]: @(connect):2:6
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76355]: 2021-07-29T04:00:19.329+0000 E QUERY [js] Error: couldn't connect to server 127.0.1.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.1.1:27017 :: caused by :: Connection refused :
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76355]: connect@src/mongo/shell/mongo.js:353:17
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76355]: @(connect):2:6
Jul 29 04:00:19 bbb19 systemd_start.sh[76327]: 2021-07-29T04:00:19.329+0000 E QUERY [js] Error: couldn't connect to server 127.0.1.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.1.1:27017 :: caused by :: Connection refused :
Jul 29 04:00:19 bbb19 systemd_start.sh[76327]: connect@src/mongo/shell/mongo.js:353:17
Jul 29 04:00:19 bbb19 systemd_start.sh[76327]: @(connect):2:6
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76355]: 2021-07-29T04:00:19.417+0000 F - [main] exception: connect failed
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76355]: 2021-07-29T04:00:19.417+0000 E - [main] exiting with code 1
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76341]: 2021-07-29T04:00:19.417+0000 F - [main] exception: connect failed
Jul 29 04:00:19 bbb19 systemd_start_frontend.sh[76341]: 2021-07-29T04:00:19.417+0000 E - [main] exiting with code 1
Jul 29 04:00:19 bbb19 systemd_start.sh[76327]: 2021-07-29T04:00:19.417+0000 F - [main] exception: connect failed
Jul 29 04:00:19 bbb19 systemd_start.sh[76327]: 2021-07-29T04:00:19.417+0000 E - [main] exiting with code 1
Jul 29 04:00:19 bbb19 systemd_start.sh[76311]: 2021-07-29T04:00:19.417+0000 F - [main] exception: connect failed
Jul 29 04:00:19 bbb19 systemd_start.sh[76311]: 2021-07-29T04:00:19.417+0000 E - [main] exiting with code 1
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-backend@1.service: Main process exited, code=exited, status=1/FAILURE
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-backend@1.service: Failed with result 'exit-code'.
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-backend@2.service: Main process exited, code=exited, status=1/FAILURE
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-backend@2.service: Failed with result 'exit-code'.
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-frontend@1.service: Main process exited, code=exited, status=1/FAILURE
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-frontend@1.service: Failed with result 'exit-code'.
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-frontend@2.service: Main process exited, code=exited, status=1/FAILURE
Jul 29 04:00:19 bbb19 systemd[1]: bbb-html5-frontend@2.service: Failed with result 'exit-code'.
It seems that MongoDB starts up too slowly, and therefore the backend and frontend processes did not come back up. The cronjob does a simple bbb-conf --restart:
- name: Restart BBB every night @4am
  cron:
    state: present
    name: "restart BigBlueButton"
    minute: "0"
    hour: "4"
    user: root
    job: "bbb-conf --restart"
Maybe the restart procedure has changed in recent versions of BBB?
A temporary workaround would be to disable the cronjob or reboot the whole server.
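Another possible mitigation (my own assumption, not an official BBB fix) would be a systemd drop-in so that a failed start of the instanced units is retried instead of being left in the failed state, created with systemctl edit bbb-html5-backend@.service (and likewise for the frontend template):

```ini
# Hypothetical drop-in, e.g.
# /etc/systemd/system/bbb-html5-backend@.service.d/override.conf
# Retry a failed start instead of giving up, so a slow mongod
# no longer leaves the instance down permanently.
[Service]
Restart=on-failure
RestartSec=10
```

After creating the drop-ins, run systemctl daemon-reload for them to take effect.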
@flyinghuman
What do you see in
sudo systemctl status mongod
and
sudo journalctl -u mongod | tail -20
? Perhaps there's a reason listed why Mongo could not start up.
We have mongod.service listed as a requirement for bbb-html5.service, and extra logic in /usr/share/meteor/bundle/ to handle the slow-starting Mongo.
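That wait-for-Mongo logic boils down to a retry loop. A minimal reconstruction sketch follows; it is my own illustration and may differ from the actual script in /usr/share/meteor/bundle/.

```shell
#!/bin/sh
# Illustrative wait-for-mongo loop (not the real BBB script):
# retry the connection several times before giving up, rather than
# failing on the first "Connection refused".
wait_for_mongo() {
    host=$1; port=$2; tries=$3
    i=0
    while [ "$i" -lt "$tries" ]; do
        # Cheap TCP probe; a mongo shell ping would also work here.
        if nc -z "$host" "$port" 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# The bind address 127.0.1.1:27017 matches the mongod options logged below.
if wait_for_mongo 127.0.1.1 27017 3; then
    echo "Mongo is up"
else
    echo "Mongo did not come up in time" >&2
fi
```

The failure mode in the journal above looks like exactly this race: the units attempted their replica-set initialization one second before mongod was listening, and exited instead of retrying.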
Hi,
sudo systemctl status mongod
● mongod.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: active (running) since Thu 2021-07-29 06:11:13 UTC; 12h ago
Docs: https://docs.mongodb.org/manual
Process: 84893 ExecStartPre=/usr/share/meteor/bundle/mongod_start_pre.sh (code=exited, status=0/SUCCESS)
Main PID: 84988 (mongod)
CGroup: /system.slice/mongod.service
└─84988 /usr/bin/mongod --config /usr/share/meteor/bundle/mongo-ramdisk.conf --oplogSize 8 --replSet rs0 --noauth
Jul 29 06:11:12 bbb19 systemd[1]: Starting High-performance, schema-free document-oriented database...
Jul 29 06:11:13 bbb19 mongod_start_pre.sh[84893]: id: ‘mongod’: no such user
Jul 29 06:11:13 bbb19 systemd[1]: Started High-performance, schema-free document-oriented database.
At 06:11 I had restarted BBB with bbb-conf --restart.
Jul 29 04:00:01 bbb19 systemd[1]: Stopping High-performance, schema-free document-oriented database...
Jul 29 04:00:08 bbb19 systemd[1]: Stopped High-performance, schema-free document-oriented database.
Jul 29 04:00:13 bbb19 systemd[1]: Starting High-performance, schema-free document-oriented database...
Jul 29 04:00:14 bbb19 mongod_start_pre.sh[76031]: id: ‘mongod’: no such user
Jul 29 04:00:14 bbb19 systemd[1]: Started High-performance, schema-free document-oriented database.
Jul 29 06:11:07 bbb19 systemd[1]: Stopping High-performance, schema-free document-oriented database...
Jul 29 06:11:08 bbb19 systemd[1]: Stopped High-performance, schema-free document-oriented database.
Jul 29 06:11:12 bbb19 systemd[1]: Starting High-performance, schema-free document-oriented database...
Jul 29 06:11:13 bbb19 mongod_start_pre.sh[84893]: id: ‘mongod’: no such user
Jul 29 06:11:13 bbb19 systemd[1]: Started High-performance, schema-free document-oriented database.
mongod.log
2021-07-29T03:17:01.258+0000 I NETWORK [conn54] received client metadata from 127.0.0.1:46460 conn54: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T03:17:01.262+0000 I NETWORK [conn54] end connection 127.0.0.1:46460 (25 connections now open)
2021-07-29T04:00:01.492+0000 I NETWORK [conn22] end connection 127.0.0.1:37608 (23 connections now open)
2021-07-29T04:00:01.492+0000 I NETWORK [conn24] end connection 127.0.0.1:37612 (22 connections now open)
2021-07-29T04:00:01.492+0000 I NETWORK [conn23] end connection 127.0.0.1:37610 (24 connections now open)
2021-07-29T04:00:01.492+0000 I NETWORK [conn21] end connection 127.0.0.1:37466 (21 connections now open)
2021-07-29T04:00:01.492+0000 I NETWORK [conn18] end connection 127.0.0.1:37354 (20 connections now open)
2021-07-29T04:00:01.578+0000 I NETWORK [conn9] end connection 127.0.0.1:37338 (19 connections now open)
2021-07-29T04:00:01.581+0000 I NETWORK [conn33] end connection 127.0.0.1:43984 (18 connections now open)
2021-07-29T04:00:01.581+0000 I NETWORK [conn17] end connection 127.0.0.1:37352 (16 connections now open)
2021-07-29T04:00:01.581+0000 I NETWORK [conn25] end connection 127.0.0.1:38900 (17 connections now open)
2021-07-29T04:00:01.581+0000 I NETWORK [conn20] end connection 127.0.0.1:37358 (15 connections now open)
2021-07-29T04:00:01.581+0000 I NETWORK [conn10] end connection 127.0.0.1:37336 (14 connections now open)
2021-07-29T04:00:01.582+0000 I NETWORK [conn28] end connection 127.0.0.1:39596 (13 connections now open)
2021-07-29T04:00:01.582+0000 I NETWORK [conn30] end connection 127.0.0.1:39600 (12 connections now open)
2021-07-29T04:00:01.582+0000 I NETWORK [conn12] end connection 127.0.0.1:37340 (11 connections now open)
2021-07-29T04:00:01.582+0000 I NETWORK [conn27] end connection 127.0.0.1:39474 (9 connections now open)
2021-07-29T04:00:01.582+0000 I NETWORK [conn29] end connection 127.0.0.1:39598 (10 connections now open)
2021-07-29T04:00:01.583+0000 I NETWORK [conn31] end connection 127.0.0.1:39912 (8 connections now open)
2021-07-29T04:00:01.583+0000 I NETWORK [conn38] end connection 127.0.0.1:48950 (7 connections now open)
2021-07-29T04:00:01.583+0000 I NETWORK [conn26] end connection 127.0.0.1:38902 (6 connections now open)
2021-07-29T04:00:01.583+0000 I NETWORK [conn11] end connection 127.0.0.1:37342 (5 connections now open)
2021-07-29T04:00:01.583+0000 I NETWORK [conn19] end connection 127.0.0.1:37356 (4 connections now open)
2021-07-29T04:00:02.078+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2021-07-29T04:00:02.131+0000 I NETWORK [conn15] end connection 127.0.0.1:37348 (3 connections now open)
2021-07-29T04:00:02.131+0000 I NETWORK [conn16] end connection 127.0.0.1:37350 (2 connections now open)
2021-07-29T04:00:02.200+0000 I NETWORK [conn14] end connection 127.0.0.1:37346 (1 connection now open)
2021-07-29T04:00:02.225+0000 I NETWORK [conn13] end connection 127.0.0.1:37344 (0 connections now open)
2021-07-29T04:00:02.408+0000 I REPL [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
2021-07-29T04:00:02.545+0000 I SHARDING [signalProcessingThread] Shutting down the WaitForMajorityService
2021-07-29T04:00:04.211+0000 I CONTROL [signalProcessingThread] Shutting down the LogicalSessionCache
2021-07-29T04:00:04.291+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2021-07-29T04:00:04.308+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock
2021-07-29T04:00:04.311+0000 I NETWORK [signalProcessingThread] Shutting down the global connection pool
2021-07-29T04:00:04.324+0000 I STORAGE [signalProcessingThread] Shutting down the FlowControlTicketholder
2021-07-29T04:00:04.324+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
2021-07-29T04:00:04.324+0000 I STORAGE [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
2021-07-29T04:00:04.382+0000 I STORAGE [signalProcessingThread] Shutting down the PeriodicThreadToDecreaseSnapshotHistoryIfNotNeeded
2021-07-29T04:00:04.382+0000 I REPL [signalProcessingThread] Shutting down the ReplicationCoordinator
2021-07-29T04:00:04.400+0000 I REPL [signalProcessingThread] shutting down replication subsystems
2021-07-29T04:00:04.455+0000 I REPL [signalProcessingThread] Stopping replication reporter thread
2021-07-29T04:00:04.555+0000 I REPL [signalProcessingThread] Stopping replication fetcher thread
2021-07-29T04:00:04.575+0000 I REPL [signalProcessingThread] Stopping replication applier thread
2021-07-29T04:00:04.634+0000 I REPL [rsBackgroundSync] Stopping replication producer
2021-07-29T04:00:05.762+0000 I REPL [rsSync-0] Finished oplog application
2021-07-29T04:00:05.762+0000 I REPL [signalProcessingThread] Stopping replication storage threads
2021-07-29T04:00:05.975+0000 I ASIO [RS] Killing all outstanding egress activity.
2021-07-29T04:00:06.158+0000 I ASIO [Replication] Killing all outstanding egress activity.
2021-07-29T04:00:06.184+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
2021-07-29T04:00:06.196+0000 I REPL [signalProcessingThread] Enqueuing the ReplicationStateTransitionLock for shutdown
2021-07-29T04:00:06.197+0000 I - [signalProcessingThread] Killing all operations for shutdown
2021-07-29T04:00:06.197+0000 I COMMAND [signalProcessingThread] Shutting down all open transactions
2021-07-29T04:00:06.197+0000 I REPL [signalProcessingThread] Acquiring the ReplicationStateTransitionLock for shutdown
2021-07-29T04:00:06.197+0000 I INDEX [signalProcessingThread] Shutting down the IndexBuildsCoordinator
2021-07-29T04:00:06.263+0000 I NETWORK [signalProcessingThread] Shutting down the ReplicaSetMonitor
2021-07-29T04:00:06.328+0000 I REPL [signalProcessingThread] Shutting down the LogicalTimeValidator
2021-07-29T04:00:06.416+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
2021-07-29T04:00:06.424+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
2021-07-29T04:00:06.429+0000 I FTDC [signalProcessingThread] Shutting down full-time data capture
2021-07-29T04:00:06.446+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2021-07-29T04:00:06.471+0000 I STORAGE [signalProcessingThread] Shutting down the HealthLog
2021-07-29T04:00:06.471+0000 I STORAGE [signalProcessingThread] Shutting down the storage engine
2021-07-29T04:00:06.471+0000 I STORAGE [signalProcessingThread] Deregistering all the collections
2021-07-29T04:00:06.681+0000 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
2021-07-29T04:00:06.854+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
2021-07-29T04:00:06.946+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2021-07-29T04:00:06.953+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
2021-07-29T04:00:06.953+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
2021-07-29T04:00:06.953+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
2021-07-29T04:00:07.001+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
2021-07-29T04:00:07.001+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
2021-07-29T04:00:07.001+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
2021-07-29T04:00:08.140+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2021-07-29T04:00:08.140+0000 I - [signalProcessingThread] Dropping the scope cache for shutdown
2021-07-29T04:00:08.148+0000 I CONTROL [signalProcessingThread] now exiting
2021-07-29T04:00:08.148+0000 I CONTROL [signalProcessingThread] shutting down with code:0
2021-07-29T04:00:18.116+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2021-07-29T04:00:18.249+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2021-07-29T04:00:18.260+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] MongoDB starting : pid=76198 port=27017 dbpath=/mnt/mongo-ramdisk 64-bit host=bbb19
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] db version v4.2.15
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] git version: d7fd78dead621a539c20791a93abec34bb1be385
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] allocator: tcmalloc
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] modules: none
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] build environment:
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] distmod: ubuntu1804
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] distarch: x86_64
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] target_arch: x86_64
2021-07-29T04:00:18.260+0000 I CONTROL [initandlisten] options: { config: "/usr/share/meteor/bundle/mongo-ramdisk.conf", net: { bindIp: "127.0.1.1", port: 27017 }, replication: { oplogSizeMB: 8, replSet: "rs0" }, security: { authorization: "disabled" }, setParameter: { diagnosticDataCollectionEnabled: "false" }, storage: { dbPath: "/mnt/mongo-ramdisk", journal: { enabled: true }, wiredTiger: { collectionConfig: { blockCompressor: "none" }, engineConfig: { cacheSizeGB: 1.0, directoryForIndexes: true, journalCompressor: "none" }, indexConfig: { prefixCompression: false } } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2021-07-29T04:00:18.261+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1024M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=none),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2021-07-29T04:00:18.704+0000 I STORAGE [initandlisten] WiredTiger message [1627531218:704892][76198:0x7f301202fb00], txn-recover: Set global recovery timestamp: (0, 0)
2021-07-29T04:00:18.707+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2021-07-29T04:00:18.866+0000 I STORAGE [initandlisten] No table logging settings modifications are required for existing WiredTiger tables. Logging enabled? 0
2021-07-29T04:00:18.867+0000 I STORAGE [initandlisten] Timestamp monitor starting
2021-07-29T04:00:19.071+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2021-07-29T04:00:19.071+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
2021-07-29T04:00:19.071+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2021-07-29T04:00:19.071+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2021-07-29T04:00:19.072+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 714f8df9-5815-43f7-8861-b035d0e95c7d and options: { capped: true, size: 10485760 }
2021-07-29T04:00:19.116+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
2021-07-29T04:00:19.116+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2021-07-29T04:00:19.116+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/mnt/mongo-ramdisk/diagnostic.data'
2021-07-29T04:00:19.122+0000 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 3898435a-2838-4462-81d7-2947f856b0f4 and options: {}
2021-07-29T04:00:19.125+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
2021-07-29T04:00:19.125+0000 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 73c71207-3ac3-45de-aae8-e07771b62b58 and options: {}
2021-07-29T04:00:19.127+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
2021-07-29T04:00:19.127+0000 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2021-07-29T04:00:19.127+0000 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 9262ba05-973a-48ae-92a4-a041cbd0d78e and options: {}
2021-07-29T04:00:19.129+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
2021-07-29T04:00:19.129+0000 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2021-07-29T04:00:19.129+0000 I REPL [initandlisten] Did not find local initialized voted for document at startup.
2021-07-29T04:00:19.129+0000 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
2021-07-29T04:00:19.129+0000 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: b1ae81f2-4731-416f-af76-461af85f7a55 and options: {}
2021-07-29T04:00:19.131+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
2021-07-29T04:00:19.131+0000 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version: <unsharded>
2021-07-29T04:00:19.131+0000 I REPL [initandlisten] Initialized the rollback ID to 1
2021-07-29T04:00:19.131+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2021-07-29T04:00:19.132+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2021-07-29T04:00:19.132+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
2021-07-29T04:00:19.132+0000 I NETWORK [listener] Listening on 127.0.1.1
2021-07-29T04:00:19.132+0000 I NETWORK [listener] waiting for connections on port 27017
2021-07-29T04:00:19.222+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:00:19.269+0000 I COMMAND [LogicalSessionCacheReap] command config.system.sessions command: listIndexes { listIndexes: "system.sessions", cursor: {}, $db: "config" } numYields:0 ok:0 errMsg:"ns does not exist: config.system.sessions" errName:NamespaceNotFound errCode:26 reslen:134 locks:{ ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_msg 137ms
2021-07-29T04:00:19.269+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:05:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:05:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:10:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:10:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:15:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:15:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:17:01.338+0000 I NETWORK [listener] connection accepted from 127.0.0.1:51896 #1 (1 connection now open)
2021-07-29T04:17:01.338+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:51896 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T04:17:01.345+0000 I NETWORK [conn1] end connection 127.0.0.1:51896 (0 connections now open)
2021-07-29T04:20:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:20:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:25:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:25:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:30:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:30:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:35:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:35:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:40:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:40:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:45:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:45:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:50:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:50:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T04:55:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T04:55:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:00:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:00:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:05:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:05:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:10:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:10:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:15:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:15:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:17:01.414+0000 I NETWORK [listener] connection accepted from 127.0.0.1:57210 #2 (1 connection now open)
2021-07-29T05:17:01.414+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:57210 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T05:17:01.418+0000 I NETWORK [conn2] end connection 127.0.0.1:57210 (0 connections now open)
2021-07-29T05:20:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:20:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:35:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:35:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:40:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:40:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:45:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:45:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:50:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:50:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T05:55:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T05:55:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T06:00:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T06:00:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T06:05:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T06:05:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T06:10:19.132+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-07-29T06:10:19.132+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2021-07-29T06:11:07.983+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2021-07-29T06:11:07.983+0000 I REPL [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
2021-07-29T06:11:07.992+0000 I SHARDING [signalProcessingThread] Shutting down the WaitForMajorityService
2021-07-29T06:11:07.992+0000 I CONTROL [signalProcessingThread] Shutting down the LogicalSessionCache
2021-07-29T06:11:07.992+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2021-07-29T06:11:07.993+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock
2021-07-29T06:11:07.993+0000 I NETWORK [signalProcessingThread] Shutting down the global connection pool
2021-07-29T06:11:07.993+0000 I STORAGE [signalProcessingThread] Shutting down the FlowControlTicketholder
2021-07-29T06:11:07.993+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
2021-07-29T06:11:07.993+0000 I STORAGE [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
2021-07-29T06:11:07.993+0000 I STORAGE [signalProcessingThread] Shutting down the PeriodicThreadToDecreaseSnapshotHistoryIfNotNeeded
2021-07-29T06:11:07.993+0000 I REPL [signalProcessingThread] Shutting down the ReplicationCoordinator
2021-07-29T06:11:07.993+0000 I REPL [signalProcessingThread] shutting down replication subsystems
2021-07-29T06:11:07.994+0000 I ASIO [Replication] Killing all outstanding egress activity.
2021-07-29T06:11:07.994+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
2021-07-29T06:11:07.994+0000 I REPL [signalProcessingThread] Enqueuing the ReplicationStateTransitionLock for shutdown
2021-07-29T06:11:07.994+0000 I - [signalProcessingThread] Killing all operations for shutdown
2021-07-29T06:11:07.994+0000 I COMMAND [signalProcessingThread] Shutting down all open transactions
2021-07-29T06:11:07.994+0000 I REPL [signalProcessingThread] Acquiring the ReplicationStateTransitionLock for shutdown
2021-07-29T06:11:07.994+0000 I INDEX [signalProcessingThread] Shutting down the IndexBuildsCoordinator
2021-07-29T06:11:07.994+0000 I NETWORK [signalProcessingThread] Shutting down the ReplicaSetMonitor
2021-07-29T06:11:07.994+0000 I REPL [signalProcessingThread] Shutting down the LogicalTimeValidator
2021-07-29T06:11:07.995+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
2021-07-29T06:11:07.995+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
2021-07-29T06:11:07.995+0000 I FTDC [signalProcessingThread] Shutting down full-time data capture
2021-07-29T06:11:07.995+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Shutting down the HealthLog
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Shutting down the storage engine
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Deregistering all the collections
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
2021-07-29T06:11:07.996+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
2021-07-29T06:11:08.041+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
2021-07-29T06:11:08.041+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
2021-07-29T06:11:08.041+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
2021-07-29T06:11:08.042+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2021-07-29T06:11:08.042+0000 I - [signalProcessingThread] Dropping the scope cache for shutdown
2021-07-29T06:11:08.042+0000 I CONTROL [signalProcessingThread] now exiting
2021-07-29T06:11:08.042+0000 I CONTROL [signalProcessingThread] shutting down with code:0
2021-07-29T06:11:13.033+0000 I CONTROL [main] ***** SERVER RESTARTED *****
After the restart at 06:11:
2021-07-29T06:11:13.241+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
2021-07-29T06:11:13.241+0000 I NETWORK [listener] Listening on 127.0.1.1
2021-07-29T06:11:13.241+0000 I NETWORK [listener] waiting for connections on port 27017
2021-07-29T06:11:16.062+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35102 #1 (1 connection now open)
2021-07-29T06:11:16.062+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35104 #2 (2 connections now open)
2021-07-29T06:11:16.062+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35106 #3 (3 connections now open)
2021-07-29T06:11:16.062+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:35104 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T06:11:16.062+0000 I NETWORK [conn3] received client metadata from 127.0.0.1:35106 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T06:11:16.064+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35108 #4 (4 connections now open)
2021-07-29T06:11:16.064+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:35102 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T06:11:16.065+0000 I NETWORK [conn4] received client metadata from 127.0.0.1:35108 conn4: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2021-07-29T06:11:16.065+0000 I REPL [conn2] replSetInitiate admin command received from client
2021-07-29T06:11:16.067+0000 I REPL [conn2] replSetInitiate config object with 1 members parses ok
2021-07-29T06:11:16.067+0000 I REPL [conn4] replSetInitiate admin command received from client
2021-07-29T06:11:16.068+0000 I SHARDING [conn2] Marking collection local.oplog.rs as collection version: <unsharded>
2021-07-29T06:11:16.068+0000 I REPL [conn2] ******
2021-07-29T06:11:16.068+0000 I REPL [conn2] creating replication oplog of size: 8MB...
2021-07-29T06:11:16.068+0000 I STORAGE [conn2] createCollection: local.oplog.rs with generated UUID: c86cdaef-7e3a-4f88-92f6-4603073cdab7 and options: { capped: true, size: 8388608, autoIndexId: false }
2021-07-29T06:11:16.068+0000 I REPL [conn1] replSetInitiate admin command received from client
2021-07-29T06:11:16.069+0000 I NETWORK [conn4] end connection 127.0.0.1:35108 (3 connections now open)
2021-07-29T06:11:16.069+0000 I NETWORK [conn1] end connection 127.0.0.1:35102 (2 connections now open)
2021-07-29T06:11:16.070+0000 I STORAGE [conn2] Starting OplogTruncaterThread local.oplog.rs
2021-07-29T06:11:16.071+0000 I STORAGE [conn2] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2021-07-29T06:11:16.071+0000 I STORAGE [conn2] Scanning the oplog to determine where to place markers for truncation
2021-07-29T06:11:16.071+0000 I STORAGE [conn2] WiredTiger record store oplog processing took 0ms
2021-07-29T06:11:16.072+0000 I REPL [conn3] replSetInitiate admin command received from client
2021-07-29T06:11:16.075+0000 I REPL [conn2] ******
2021-07-29T06:11:16.079+0000 I NETWORK [conn3] end connection 127.0.0.1:35106 (1 connection now open)
2021-07-29T06:11:16.089+0000 I STORAGE [conn2] createCollection: local.system.replset with generated UUID: 8ec2df01-e773-48f8-810e-ce0da57520dc and options: {}
2021-07-29T06:11:16.092+0000 I INDEX [conn2] index build: done building index _id_ on ns local.system.replset
2021-07-29T06:11:16.116+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35110 #5 (2 connections now open)
Which process do I need to monitor to detect that the html5 frontend and backend are not running?
# ps -ef | grep meteor
mongodb 29832 1 0 Jul23 ? 00:54:06 /usr/bin/mongod --config /usr/share/meteor/bundle/mongo-ramdisk.conf --oplogSize 8 --replSet rs0 --noauth
meteor 30728 30014 0 Jul23 ? 00:03:33 /usr/share/node-v12.16.1-linux-x64/bin/node --max-old-space-size=2048 --max_semi_space_size=128 main.js NODEJS_BACKEND_INSTANCE_ID=1
meteor 30729 30167 0 Jul23 ? 00:04:13 /usr/share/node-v12.16.1-linux-x64/bin/node --max-old-space-size=2048 --max_semi_space_size=128 main.js
meteor 30738 30095 0 Jul23 ? 00:04:02 /usr/share/node-v12.16.1-linux-x64/bin/node --max-old-space-size=2048 --max_semi_space_size=128 main.js
meteor 30757 30136 0 Jul23 ? 00:04:10 /usr/share/node-v12.16.1-linux-x64/bin/node --max-old-space-size=2048 --max_semi_space_size=128 main.js
meteor 30758 30126 0 Jul23 ? 00:04:12 /usr/share/node-v12.16.1-linux-x64/bin/node --max-old-space-size=2048 --max_semi_space_size=128 main.js
meteor 30775 30070 0 Jul23 ? 00:03:39 /usr/share/node-v12.16.1-linux-x64/bin/node --max-old-space-size=2048 --max_semi_space_size=128 main.js NODEJS_BACKEND_INSTANCE_ID=2
This is a list of the 4 frontends and 2 backends (all bbb-html5) on one of the demo servers.
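Rather than grepping for `node`/`meteor` processes, the individual html5 instances can be watched as systemd units (the unit names `bbb-html5-frontend@N` / `bbb-html5-backend@N` match the `bbb-conf --status` output later in this thread). A minimal sketch, with the unit state passed in as a literal for illustration; in a real check, feed it from `systemctl is-active "$unit"`:

```shell
#!/bin/sh
# Sketch: flag any bbb-html5 unit that is not "active".
# check_unit takes a unit name and its state string; in practice the
# state would come from: state=$(systemctl is-active "$unit")
check_unit() {
  unit=$1
  state=$2
  [ "$state" = "active" ] || echo "ALERT: $unit is $state"
}

# Hard-coded sample states for illustration only:
check_unit bbb-html5-frontend@1 active
check_unit bbb-html5-backend@1 failed
```

Running this prints an alert line only for the non-active unit; wiring the loop over all four units into cron or a monitoring agent is left to the operator.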
Thanks. At least I will now get notified when the service is not running and can restart it if necessary. Still, this behavior is not normal. Also, Scalelite should check whether the server is really ready (or the server should return a 404) so that these servers automatically go offline in Scalelite and come back online once everything is okay again.
https://github.com/bigbluebutton/bigbluebutton/pull/13527 should help with this. The update to Node 14 (in BBB 2.4) should also help with stability. Monitoring the servers is still a good idea. Please reopen if you are able to crash bbb-html5 with reproducible steps.
html5client returns "404 Not Found" when joining a meeting on my BigBlueButton 2.3.4 (running on Ubuntu 18.04).
When I check with bbb-conf --status, it shows:
nginx —————————————————► [✔ - active]
freeswitch ————————————► [✔ - active]
redis-server ——————————► [✔ - active]
bbb-apps-akka —————————► [✔ - active]
bbb-fsesl-akka ————————► [✔ - active]
mongod ————————————————► [✔ - active]
bbb-html5 —————————————► [✔ - active]
bbb-webrtc-sfu ————————► [✔ - active]
kurento-media-server ——► [✔ - active]
bbb-html5-backend@1 ———► [✘ - failed]
bbb-html5-backend@2 ———► [✘ - failed]
bbb-html5-frontend@1 ——► [✘ - failed]
bbb-html5-frontend@2 ——► [✘ - failed]
etherpad ——————————————► [✔ - active]
bbb-web ———————————————► [✔ - active]
And the URL when I join a meeting looks like this: xxxxxx.com/html5client/join?sessionToken=nlkefqvhooci2umr
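With all four html5 units failed, a first recovery step is usually to restart them. A dry-run sketch that only prints the commands (unit names taken from the status output above); drop the `echo` and run as root to actually restart, then recheck with `bbb-conf --status`:

```shell
#!/bin/sh
# Dry-run sketch: print the restart commands for the failed units.
for unit in bbb-html5-backend@1 bbb-html5-backend@2 \
            bbb-html5-frontend@1 bbb-html5-frontend@2; do
  echo "systemctl restart $unit"
done
```

If the units fail again immediately, `journalctl -u bbb-html5-frontend@1` (and the backend units) should show why they are exiting.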