litmuschaos / litmus-helm

Helm Charts for the Litmus Chaos Operator & CRDs

[litmus] Fresh install of litmus and mongo does not become ready #355

Closed: cyron closed this issue 6 months ago

cyron commented 10 months ago

I need some advice or help with my Litmus installation.

On a fresh install of Litmus, MongoDB never becomes ready:

# kubectl -n litmus get pod
NAME                                       READY   STATUS     RESTARTS      AGE
chaos-litmus-auth-server-fb97d48bb-sm95r   0/1     Init:0/1   0             28m
chaos-litmus-frontend-c567fdb6b-pn626      1/1     Running    0             28m
chaos-litmus-server-864f74c779-2wljc       0/1     Init:0/1   0             28m
chaos-mongodb-0                            0/1     Running    1 (13m ago)   28m
chaos-mongodb-arbiter-0                    1/1     Running    0             28m
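
The two control-plane pods stuck in Init:0/1 are blocked by init containers that wait for MongoDB, so they will stay like this until mongo is healthy. One way to confirm is to inspect the init container (its name below is a guess; kubectl describe shows the real one):

# kubectl -n litmus describe pod chaos-litmus-server-864f74c779-2wljc
# kubectl -n litmus logs chaos-litmus-server-864f74c779-2wljc -c wait-for-mongodb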

The MongoDB pod itself logs many errors:

# kubectl -n litmus logs chaos-mongodb-0
Defaulted container "mongodb" out of: mongodb, volume-permissions (init)
mongodb 08:07:15.57 INFO  ==> Advertised Hostname: chaos-mongodb-0.chaos-mongodb-headless.litmus.svc.cluster.local
mongodb 08:07:15.57 INFO  ==> Advertised Port: 27017
realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb 08:07:15.57 INFO  ==> Data dir empty, checking if the replica set already exists
MongoNetworkError: connect ECONNREFUSED 10.244.98.247:27017
mongodb 08:07:16.43 INFO  ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb 08:07:16.45 
mongodb 08:07:16.45 Welcome to the Bitnami mongodb container
mongodb 08:07:16.45 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 08:07:16.45 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 08:07:16.45 
mongodb 08:07:16.45 INFO  ==> ** Starting MongoDB setup **
mongodb 08:07:16.47 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 08:07:16.51 INFO  ==> Initializing MongoDB...
mongodb 08:07:16.53 INFO  ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.244.98.247:27017
mongodb 08:07:18.13 INFO  ==> Creating users...
mongodb 08:07:18.13 INFO  ==> Creating root user...
Current Mongosh Log ID: 65702bb6f41eb31469d26111
Connecting to:          mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.4.2
Using MongoDB:          5.0.8
Using Mongosh:          1.4.2

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

------
   The server generated these startup warnings when booting:
   2023-12-06T08:07:17.278+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
------

test> { ok: 1 }
mongodb 08:16:01.32 INFO  ==> Users created
mongodb 08:16:01.32 INFO  ==> Writing keyfile for replica set authentication...
mongodb 08:16:01.38 INFO  ==> Configuring MongoDB replica set...
mongodb 08:16:01.38 INFO  ==> Stopping MongoDB...
[root@k8s-ceph-litmus-cp1 ~]# k -n litmus logs chaos-mongodb-0
Defaulted container "mongodb" out of: mongodb, volume-permissions (init)
mongodb 08:21:25.13 INFO  ==> Advertised Hostname: chaos-mongodb-0.chaos-mongodb-headless.litmus.svc.cluster.local
mongodb 08:21:25.13 INFO  ==> Advertised Port: 27017
mongodb 08:21:25.15 INFO  ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb 08:21:25.16 
mongodb 08:21:25.16 Welcome to the Bitnami mongodb container
mongodb 08:21:25.16 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 08:21:25.16 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 08:21:25.16 
mongodb 08:21:25.16 INFO  ==> ** Starting MongoDB setup **
mongodb 08:21:25.18 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 08:21:25.21 INFO  ==> Initializing MongoDB...
mongodb 08:21:25.26 INFO  ==> Enabling authentication...
mongodb 08:21:25.27 INFO  ==> Deploying MongoDB with persisted data...
mongodb 08:21:25.27 INFO  ==> Writing keyfile for replica set authentication...
mongodb 08:21:25.30 INFO  ==> ** MongoDB setup finished! **

mongodb 08:21:25.31 INFO  ==> ** Starting MongoDB **

{"t":{"$date":"2023-12-06T08:21:25.334+00:00"},"s":"I",  "c":"CONTROL",  "id":20698,   "ctx":"-","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2023-12-06T08:21:25.336+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2023-12-06T08:21:25.336+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2023-12-06T08:21:25.336+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2023-12-06T08:21:25.337+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"chaos-mongodb-0"}}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.8","gitVersion":"c87e1c23421bf79614baf500fda6622bd90f674e","openSSLVersion":"OpenSSL 1.1.1n  15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 5.14.0-284.30.1.el9_2.x86_64"}}}
{"t":{"$date":"2023-12-06T08:21:25.360+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"enableMajorityReadConcern":true,"replSetName":"rs0"},"security":{"authorization":"enabled","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"setParameter":{"enableLocalhostAuthBypass":"false"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2023-12-06T08:21:25.361+00:00"},"s":"W",  "c":"STORAGE",  "id":22271,   "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"/bitnami/mongodb/data/db/mongod.lock"}}
{"t":{"$date":"2023-12-06T08:21:25.361+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/bitnami/mongodb/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2023-12-06T08:21:25.361+00:00"},"s":"W",  "c":"STORAGE",  "id":22302,   "ctx":"initandlisten","msg":"Recovering data from the last clean checkpoint."}
{"t":{"$date":"2023-12-06T08:21:25.361+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=1439M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2023-12-06T08:21:26.009+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:9220][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 3"}}
{"t":{"$date":"2023-12-06T08:21:26.050+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:50686][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 3 through 3"}}
{"t":{"$date":"2023-12-06T08:21:26.112+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:112615][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 2/39424 to 3/256"}}
{"t":{"$date":"2023-12-06T08:21:26.112+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:112909][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 3"}}
{"t":{"$date":"2023-12-06T08:21:26.159+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:159110][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 3 through 3"}}
{"t":{"$date":"2023-12-06T08:21:26.200+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:200286][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2023-12-06T08:21:26.200+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:200336][1:0x7fca6ccef100], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2023-12-06T08:21:26.201+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1701850886:201417][1:0x7fca6ccef100], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 3, snapshot max: 3 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 54"}}
{"t":{"$date":"2023-12-06T08:21:26.203+00:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":842}}
{"t":{"$date":"2023-12-06T08:21:26.203+00:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2023-12-06T08:21:26.203+00:00"},"s":"I",  "c":"STORAGE",  "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}}
{"t":{"$date":"2023-12-06T08:21:26.205+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2023-12-06T08:21:26.205+00:00"},"s":"W",  "c":"CONTROL",  "id":22178,   "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'","tags":["startupWarnings"]}
{"t":{"$date":"2023-12-06T08:21:26.238+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2023-12-06T08:21:26.238+00:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
{"t":{"$date":"2023-12-06T08:21:26.239+00:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2023-12-06T08:21:26.240+00:00"},"s":"I",  "c":"SHARDING", "id":20997,   "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}}
{"t":{"$date":"2023-12-06T08:21:26.240+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/bitnami/mongodb/data/db/diagnostic.data"}}
{"t":{"$date":"2023-12-06T08:21:26.242+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}}
{"t":{"$date":"2023-12-06T08:21:26.242+00:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigStartingUp","oldState":"ConfigPreStart"}}
{"t":{"$date":"2023-12-06T08:21:26.243+00:00"},"s":"I",  "c":"REPL",     "id":4280500, "ctx":"initandlisten","msg":"Attempting to create internal replication collections"}
{"t":{"$date":"2023-12-06T08:21:26.244+00:00"},"s":"I",  "c":"REPL",     "id":4280501, "ctx":"initandlisten","msg":"Attempting to load local voted for document"}
{"t":{"$date":"2023-12-06T08:21:26.244+00:00"},"s":"I",  "c":"REPL",     "id":21311,   "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"}
{"t":{"$date":"2023-12-06T08:21:26.244+00:00"},"s":"I",  "c":"REPL",     "id":4280502, "ctx":"initandlisten","msg":"Searching for local Rollback ID document"}
{"t":{"$date":"2023-12-06T08:21:26.245+00:00"},"s":"I",  "c":"REPL",     "id":21529,   "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}}
{"t":{"$date":"2023-12-06T08:21:26.245+00:00"},"s":"I",  "c":"REPL",     "id":501401,  "ctx":"initandlisten","msg":"Incrementing the rollback ID after unclean shutdown"}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"REPL",     "id":21532,   "ctx":"initandlisten","msg":"Incremented the rollback ID","attr":{"rbid":2}}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"REPL",     "id":21313,   "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigUninitialized","oldState":"ConfigStartingUp"}}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"CONTROL",  "id":20714,   "ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"REPL",     "id":40440,   "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"CONTROL",  "id":20711,   "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}}
{"t":{"$date":"2023-12-06T08:21:26.246+00:00"},"s":"I",  "c":"REPL",     "id":40445,   "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"}
{"t":{"$date":"2023-12-06T08:21:26.247+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/opt/bitnami/mongodb/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2023-12-06T08:21:26.247+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2023-12-06T08:21:26.247+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
{"t":{"$date":"2023-12-06T08:21:26.442+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}}
{"t":{"$date":"2023-12-06T08:21:26.843+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}}
{"t":{"$date":"2023-12-06T08:21:27.001+00:00"},"s":"I",  "c":"FTDC",     "id":20631,   "ctx":"ftdc","msg":"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost","attr":{"error":{"code":0,"codeName":"OK"}}}
{"t":{"$date":"2023-12-06T08:21:27.444+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}}
{"t":{"$date":"2023-12-06T08:21:28.245+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}}
{"t":{"$date":"2023-12-06T08:21:29.246+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}}
{"t":{"$date":"2023-12-06T08:21:30.447+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}}
{"t":{"$date":"2023-12-06T08:21:31.686+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.210.201:47428","uuid":"7650daa9-9ecd-4cd6-99fd-c73a43432a6c","connectionId":1,"connectionCount":1}}
{"t":{"$date":"2023-12-06T08:21:31.766+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn1","msg":"client metadata","attr":{"remote":"10.244.210.201:47428","client":"conn1","doc":{"application":{"name":"mongosh 2.0.0"},"driver":{"name":"nodejs|mongosh","version":"6.0.0|2.0.0"},"platform":"Node.js v20.5.1, LE","os":{"name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64","type":"Linux"}}}}
{"t":{"$date":"2023-12-06T08:21:31.768+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.210.202:34152","uuid":"b4578cd5-4a64-4937-b30e-407d16337229","connectionId":2,"connectionCount":2}}
{"t":{"$date":"2023-12-06T08:21:31.773+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn2","msg":"client metadata","attr":{"remote":"10.244.210.202:34152","client":"conn2","doc":{"application":{"name":"mongosh 2.0.0"},"driver":{"name":"nodejs|mongosh","version":"6.0.0|2.0.0"},"platform":"Node.js v20.5.1, LE","os":{"name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64","type":"Linux"}}}}
{"t":{"$date":"2023-12-06T08:21:31.849+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}}
{"t":{"$date":"2023-12-06T08:21:33.451+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}}
{"t":{"$date":"2023-12-06T08:21:35.252+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2000}}
{"t":{"$date":"2023-12-06T08:21:35.834+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:50376","uuid":"13b79f99-ef77-429a-b55f-eb7b0ba3a747","connectionId":3,"connectionCount":3}}
{"t":{"$date":"2023-12-06T08:21:35.840+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn3","msg":"client metadata","attr":{"remote":"127.0.0.1:50376","client":"conn3","doc":{"driver":{"name":"nodejs|mongosh","version":"4.6.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64"},"platform":"Node.js v14.19.1, LE (unified)","version":"4.6.0|1.4.2","application":{"name":"mongosh 1.4.2"}}}}
{"t":{"$date":"2023-12-06T08:21:35.851+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:50382","uuid":"7f56019c-fd4f-45b6-9997-6692bdeb967f","connectionId":4,"connectionCount":4}}
{"t":{"$date":"2023-12-06T08:21:35.852+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn4","msg":"client metadata","attr":{"remote":"127.0.0.1:50382","client":"conn4","doc":{"driver":{"name":"nodejs|mongosh","version":"4.6.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64"},"platform":"Node.js v14.19.1, LE (unified)","version":"4.6.0|1.4.2","application":{"name":"mongosh 1.4.2"}}}}
{"t":{"$date":"2023-12-06T08:21:35.855+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:50396","uuid":"273d5564-7993-48c6-9387-68cac689b914","connectionId":5,"connectionCount":5}}
{"t":{"$date":"2023-12-06T08:21:35.856+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn5","msg":"client metadata","attr":{"remote":"127.0.0.1:50396","client":"conn5","doc":{"driver":{"name":"nodejs|mongosh","version":"4.6.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64"},"platform":"Node.js v14.19.1, LE (unified)","version":"4.6.0|1.4.2","application":{"name":"mongosh 1.4.2"}}}}
{"t":{"$date":"2023-12-06T08:21:37.254+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}}
{"t":{"$date":"2023-12-06T08:21:39.456+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}}
{"t":{"$date":"2023-12-06T08:21:41.859+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}}
{"t":{"$date":"2023-12-06T08:21:42.269+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.210.201:50440","uuid":"5f7117fd-1eba-4e0b-99b1-ccc23ded0125","connectionId":6,"connectionCount":6}}
{"t":{"$date":"2023-12-06T08:21:42.270+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn6","msg":"client metadata","attr":{"remote":"10.244.210.201:50440","client":"conn6","doc":{"application":{"name":"mongosh 2.0.0"},"driver":{"name":"nodejs|mongosh","version":"6.0.0|2.0.0"},"platform":"Node.js v20.5.1, LE","os":{"name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64","type":"Linux"}}}}
{"t":{"$date":"2023-12-06T08:21:42.277+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.210.202:47588","uuid":"89968a3b-3b69-4cbd-89bf-3945d6e08ea1","connectionId":7,"connectionCount":7}}
{"t":{"$date":"2023-12-06T08:21:42.278+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn7","msg":"client metadata","attr":{"remote":"10.244.210.202:47588","client":"conn7","doc":{"application":{"name":"mongosh 2.0.0"},"driver":{"name":"nodejs|mongosh","version":"6.0.0|2.0.0"},"platform":"Node.js v20.5.1, LE","os":{"name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64","type":"Linux"}}}}
{"t":{"$date":"2023-12-06T08:21:44.459+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2800}}
{"t":{"$date":"2023-12-06T08:21:46.356+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:37834","uuid":"bdcd0bca-c1ee-45c0-8d09-7c7ad252f065","connectionId":8,"connectionCount":8}}
{"t":{"$date":"2023-12-06T08:21:46.357+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn8","msg":"client metadata","attr":{"remote":"127.0.0.1:37834","client":"conn8","doc":{"driver":{"name":"nodejs|mongosh","version":"4.6.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.14.0-284.30.1.el9_2.x86_64"},"platform":"Node.js v14.19.1, LE (unified)","version":"4.6.0|1.4.2","application":{"name":"mongosh 1.4.2"}}}}
{"t":{"$date":"2023-12-06T08:21:47.262+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3000}}
{"t":{"$date":"2023-12-06T08:21:50.264+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3200}}
{"t":{"$date":"2023-12-06T08:21:53.467+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3400}}
{"t":{"$date":"2023-12-06T08:21:56.871+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3600}}
{"t":{"$date":"2023-12-06T08:22:00.471+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3800}}
{"t":{"$date":"2023-12-06T08:22:01.580+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn1","msg":"Interrupted operation as its client disconnected","attr":{"opId":2998}}
{"t":{"$date":"2023-12-06T08:22:01.580+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn6","msg":"Connection ended","attr":{"remote":"10.244.210.201:50440","uuid":"5f7117fd-1eba-4e0b-99b1-ccc23ded0125","connectionId":6,"connectionCount":7}}
{"t":{"$date":"2023-12-06T08:22:01.581+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn1","msg":"Connection ended","attr":{"remote":"10.244.210.201:47428","uuid":"7650daa9-9ecd-4cd6-99fd-c73a43432a6c","connectionId":1,"connectionCount":6}}
{"t":{"$date":"2023-12-06T08:22:01.585+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn7","msg":"Connection ended","attr":{"remote":"10.244.210.202:47588","uuid":"89968a3b-3b69-4cbd-89bf-3945d6e08ea1","connectionId":7,"connectionCount":5}}
{"t":{"$date":"2023-12-06T08:22:01.585+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn2","msg":"Interrupted operation as its client disconnected","attr":{"opId":3002}}
{"t":{"$date":"2023-12-06T08:22:01.585+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn2","msg":"Connection ended","attr":{"remote":"10.244.210.202:34152","uuid":"b4578cd5-4a64-4937-b30e-407d16337229","connectionId":2,"connectionCount":4}}

And this is the values.yaml:

upgradeAgent:
  nodeSelector:
    role: application-node
portal:
  frontend:
    service:
      type: NodePort
    nodeSelector:
      role: application-node
  server:
    nodeSelector:
      role: application-node
mongodb:
  global:
    storageClass: local-path
  auth:
    rootPassword: "1234"
  architecture: replicaset
  replicaCount: 3    
  nodeSelector:
    role: application-node
  volumePermissions:
    enabled: true
  arbiter:
    nodeSelector:
      role: application-node
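
A quick way to check whether the replica set ever finishes initializing is to query it directly, using the root credentials from the values above (a sketch only; adjust pod and release names to your install):

# kubectl -n litmus exec chaos-mongodb-0 -c mongodb -- mongosh -u root -p 1234 --authenticationDatabase admin --quiet --eval "rs.status().ok"

If that command hangs instead of printing 1, it matches the mongosh --eval freeze discussed in the next comment.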
Calvinaud commented 8 months ago

Hello,

Not sure if it's the same root cause, but I also had trouble deploying because mongo never became ready and the pod kept getting restarted.

For me, the issue was that the readiness and liveness probes never succeeded. It was related to https://github.com/bitnami/charts/issues/10264; it seems there is an issue with older versions of mongosh.

In the current release of Litmus, mongo uses the image bitnami/mongodb:5.0.8-debian-10-r24, which includes mongosh version 1.4.2.

According to the last post in https://www.mongodb.com/community/forums/t/mongosh-eval-freezes-the-shell/121406/12, the issue happens for all versions of mongosh below 1.6.1.
That post also describes three solutions. I used the last one, which is to upgrade to a newer version of mongosh; I currently use the tag 5.0.23-debian-11-r7.
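
For reference, a minimal override that applies this fix through the chart's Bitnami MongoDB subchart (file and release names here are examples; adjust to your install):

mongodb:
  image:
    tag: 5.0.23-debian-11-r7

# helm upgrade chaos litmuschaos/litmus -n litmus --reuse-values -f mongodb-tag.yaml

Keep in mind that a PVC initialized by a failed install keeps its old state, so on a broken fresh install it can be simpler to uninstall and delete the MongoDB PVCs before retrying.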

gioppoluca commented 7 months ago

I tried the proposed image, with no luck.

My env is k8s (rke2) 1.26.8, chart version 3.1.0, and I'm behind an enterprise proxy. Here are the values used:

# Default values for litmus.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

portalScope: cluster

nameOverride: ""

# -- Additional labels
customLabels: {}
# my.company.com/concourse-cd: 2

# -- Use existing secret (e.g., External Secrets)
existingSecret: ""

adminConfig:
  JWTSecret: "litmus-portal@123"
  VERSION: "3.1.0"
  SKIP_SSL_VERIFY: "false"
  # -- leave empty if using the MongoDB deployed by this chart
  DBPASSWORD: ""
  DBUSER: ""
  DB_SERVER: "mongodb://litmus-prod01-mongodb-headless"
  DB_PORT: ""
  ADMIN_USERNAME: "admin"
  ADMIN_PASSWORD: "litmus"

image:
  imageRegistryName: litmuschaos.docker.scarf.sh/litmuschaos
  # Optional pod imagePullSecrets
  imagePullSecrets: []

ingress:
  enabled: true
  name: litmus-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    kubernetes.io/ingress.class: traefik-csi
    # kubernetes.io/tls-acme: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /$1

  ingressClassName: ""
  host:
    # -- This is ingress hostname (ex: my-domain.com)
    name: "litmus.k8s-csi-prod01.nivolapiemonte.it"
    frontend:
      # -- You may need to adapt the path depending on your ingress controller
      path: /(.*)
      # -- Allow to set [pathType](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types) for the frontend path
      pathType: ImplementationSpecific
    backend:
      # -- You may need to adapt the path depending on your ingress controller
      path: /backend/(.*)
      # -- Allow to set [pathType](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types) for the backend path
      pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts: []

upgradeAgent:
  enabled: true
  controlPlane:
    image:
      repository: upgrade-agent-cp
      tag: "3.1.0"
      pullPolicy: "Always"
    restartPolicy: OnFailure
  nodeSelector: {}
  tolerations: []
  affinity: {}
  resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
#   limits:
#     cpu: 100m
#     memory: 128Mi
#   requests:
#     cpu: 100m
#     memory: 128Mi

portal:
  frontend:
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
      targetMemoryUtilizationPercentage: 50
    updateStrategy: {}
    ## Strategy for deployment updates.
    ##
    ## Example:
    ##
    ##   strategy:
    ##     type: RollingUpdate
    ##     rollingUpdate:
    ##       maxSurge: 1
    ##       maxUnavailable: 25%
    automountServiceAccountToken: false
    # securityContext:
    #   runAsUser: 2000
    #   allowPrivilegeEscalation: false
    #   runAsNonRoot: true
    image:
      repository: litmusportal-frontend
      tag: 3.1.0
      pullPolicy: "Always"
    containerPort: 8185
    customLabels: {}
    # my.company.com/tier: "frontend"

    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      requests:
        memory: "150Mi"
        cpu: "125m"
        ephemeral-storage: "500Mi"
      limits:
        memory: "512Mi"
        cpu: "550m"
        ephemeral-storage: "1Gi"
    livenessProbe:
      failureThreshold: 5
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    service:
      annotations: {}
      type: ClusterIP
      port: 9091
      targetPort: 8185
    virtualService:
      enabled: false
      hosts: []
      gateways: []
      pathPrefixEnabled: false
    nodeSelector: {}
    tolerations: []
    affinity: {}

  server:
    replicas: 1
    updateStrategy: {}
    ## Strategy for deployment updates.
    ##
    ## Example:
    ##
    ##   strategy:
    ##     type: RollingUpdate
    ##     rollingUpdate:
    ##       maxSurge: 1
    ##       maxUnavailable: 25%
    serviceAccountName: litmus-server-account
    customLabels: {}
    # my.company.com/tier: "backend"
    waitForMongodb:
      image:
        repository: mongo
        tag: 6
        pullPolicy: "Always"
      securityContext:
        {}
        # runAsUser: 101
        # allowPrivilegeEscalation: false
        # runAsNonRoot: true
        # readOnlyRootFilesystem: true
      resources:
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        requests:
          memory: "150Mi"
          cpu: "25m"
          ephemeral-storage: "500Mi"
        limits:
          memory: "512Mi"
          cpu: "250m"
          ephemeral-storage: "1Gi"
    graphqlServer:
      volumes:
        - name: gitops-storage
          emptyDir: {}
        - name: hub-storage
          emptyDir: {}
      volumeMounts:
        - mountPath: /tmp/
          name: gitops-storage
        - mountPath: /tmp/version
          name: hub-storage
      securityContext:
        runAsUser: 2000
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        readOnlyRootFilesystem: true
      image:
        repository: litmusportal-server
        tag: 3.1.0
        pullPolicy: "Always"
      ports:
        - name: gql-server
          containerPort: 8080
        - name: gql-rpc-server
          containerPort: 8000
      service:
        annotations: {}
        type: ClusterIP
        graphqlServer:
          port: 9002
          targetPort: 8080
        graphqlRpcServer:
          port: 8000
          targetPort: 8000
      imageEnv:
        SUBSCRIBER_IMAGE: "litmusportal-subscriber:3.1.0"
        EVENT_TRACKER_IMAGE: "litmusportal-event-tracker:3.1.0"
        ARGO_WORKFLOW_CONTROLLER_IMAGE: "workflow-controller:v3.3.1"
        ARGO_WORKFLOW_EXECUTOR_IMAGE: "argoexec:v3.3.1"
        LITMUS_CHAOS_OPERATOR_IMAGE: "chaos-operator:3.1.0"
        LITMUS_CHAOS_RUNNER_IMAGE: "chaos-runner:3.1.0"
        LITMUS_CHAOS_EXPORTER_IMAGE: "chaos-exporter:3.1.0"
      genericEnv:
        TLS_SECRET_NAME: ""
        TLS_CERT_64: ""
        CONTAINER_RUNTIME_EXECUTOR: "k8sapi"
        DEFAULT_HUB_BRANCH_NAME: "v3.1.x"
        INFRA_DEPLOYMENTS: '["app=chaos-exporter", "name=chaos-operator", "app=event-tracker", "app=workflow-controller"]'
        LITMUS_AUTH_GRPC_PORT: ":3030"
        WORKFLOW_HELPER_IMAGE_VERSION: "3.1.0"
        REMOTE_HUB_MAX_SIZE: "5000000"
        INFRA_COMPATIBLE_VERSIONS: '["3.1.0"]'
        # Provide UI endpoint if using namespaced scope
        CHAOS_CENTER_UI_ENDPOINT: ""
      resources:
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        requests:
          memory: "250Mi"
          cpu: "225m"
          ephemeral-storage: "500Mi"
        limits:
          memory: "712Mi"
          cpu: "550m"
          ephemeral-storage: "1Gi"
      livenessProbe:
        failureThreshold: 5
        initialDelaySeconds: 30
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
      readinessProbe:
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
    authServer:
      replicas: 1
      autoscaling:
        enabled: false
        minReplicas: 2
        maxReplicas: 3
        targetCPUUtilizationPercentage: 50
        targetMemoryUtilizationPercentage: 50
      securityContext:
        runAsUser: 2000
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        readOnlyRootFilesystem: true
      automountServiceAccountToken: false
      image:
        repository: litmusportal-auth-server
        tag: 3.1.0
        pullPolicy: "Always"
      ports:
        - name: auth-server
          containerPort: 3030
        - name: auth-rpc-server
          containerPort: 3000
      service:
        annotations: {}
        type: ClusterIP
        authServer:
          port: 9003
          targetPort: 3000
        authRpcServer:
          port: 3030
          targetPort: 3030
      env:
        LITMUS_GQL_GRPC_PORT: ":8000"
      resources:
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        requests:
          memory: "250Mi"
          cpu: "225m"
          ephemeral-storage: "500Mi"
        limits:
          memory: "712Mi"
          cpu: "550m"
          ephemeral-storage: "1Gi"
      volumeMounts: []
      volumes: []

    nodeSelector: {}
    tolerations: []
    affinity: {}

# -- Configure the Bitnami MongoDB subchart
# see values at https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml
mongodb:
  enabled: true
  image:
    debug: true
    #tag: 5.0.8-debian-10-r24
    # TODO changed the tag as per post on litmus issue
    tag: 5.0.23-debian-11-r7
  auth:
    enabled: true
    rootUser: "root"
    rootPassword: "1234"
    replicaSetKey: Blablablablba
    # -- existingSecret Existing secret with MongoDB(®) credentials (keys: `mongodb-passwords`, `mongodb-root-password`, `mongodb-metrics-password`, `mongodb-replica-set-key`)
    existingSecret: ""
  architecture: replicaset
  replicaCount: 3
  persistence:
    enabled: true
    storageClass: "csi-storage-nas"
  volumePermissions:
    enabled: true
  metrics:
    enabled: false
    prometheusRule:
      enabled: false
  customStartupProbe:
    initialDelaySeconds: 5
    periodSeconds: 20
    timeoutSeconds: 10
    successThreshold: 1
    failureThreshold: 30
    exec:
      command:
        - sh
        - -c
        - |
          mongosh --nodb --eval "disableTelemetry()"
          /bitnami/scripts/startup-probe.sh
  arbiter:
    customStartupProbe:
      initialDelaySeconds: 5
      periodSeconds: 20
      timeoutSeconds: 10
      successThreshold: 1
      failureThreshold: 30
      exec:
        command:
          - sh
          - -c
          - |
            mongosh --nodb --eval "disableTelemetry()"
  #           /bitnami/scripts/startup-probe.sh
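
To check whether the probe command itself is what hangs, the same invocation can be run by hand inside a MongoDB pod (pod name taken from the logs below; timeout assumes coreutils is present in the image):

# kubectl -n litmus exec -it litmus-prod01-mongodb-0 -c mongodb -- timeout 15 mongosh --nodb --eval "disableTelemetry()"

If the shell never returns within the timeout, the startup probe can never succeed either.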

Logs from the arbiter:

mongodb 08:50:42.29 INFO  ==> 
2024-03-15T09:50:42.295159739+01:00 mongodb 08:50:42.29 INFO  ==> Welcome to the Bitnami mongodb container
2024-03-15T09:50:42.296505375+01:00 mongodb 08:50:42.29 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
2024-03-15T09:50:42.297895529+01:00 mongodb 08:50:42.29 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
2024-03-15T09:50:42.299243104+01:00 mongodb 08:50:42.29 INFO  ==> 
mongodb 08:50:42.30 INFO  ==> ** Starting MongoDB setup **
2024-03-15T09:50:42.315436143+01:00 mongodb 08:50:42.31 INFO  ==> Validating settings in MONGODB_* env vars...
2024-03-15T09:50:42.352489000+01:00 mongodb 08:50:42.35 INFO  ==> Initializing MongoDB...
2024-03-15T09:50:42.391994260+01:00 mongodb 08:50:42.39 INFO  ==> Writing keyfile for replica set authentication...
2024-03-15T09:50:42.400499164+01:00 mongodb 08:50:42.40 INFO  ==> Deploying MongoDB from scratch...
mongodb 08:50:42.40 DEBUG ==> Starting MongoDB in background...
2024-03-15T09:50:42.429154640+01:00 about to fork child process, waiting until server is ready for connections.
2024-03-15T09:50:42.430277593+01:00 forked process: 52
2024-03-15T09:50:43.042899555+01:00 child process started successfully, parent exiting
2024-03-15T09:50:43.510483293+01:00 MongoNetworkError: connect ECONNREFUSED 10.42.5.195:27017
2024-03-15T09:50:43.519287180+01:00 mongodb 08:50:43.51 INFO  ==> Creating users...
mongodb 08:50:43.52 INFO  ==> Users created
2024-03-15T09:50:43.538146600+01:00 mongodb 08:50:43.53 INFO  ==> Configuring MongoDB replica set...
2024-03-15T09:50:43.544561571+01:00 mongodb 08:50:43.54 INFO  ==> Stopping MongoDB...
2024-03-15T09:50:44.554157529+01:00 mongodb 08:50:44.55 DEBUG ==> Starting MongoDB in background...
2024-03-15T09:50:44.580869195+01:00 about to fork child process, waiting until server is ready for connections.
2024-03-15T09:50:44.582002932+01:00 forked process: 140
child process started successfully, parent exiting
2024-03-15T09:50:46.542079135+01:00 mongodb 08:50:46.54 DEBUG ==> Waiting for primary node...
2024-03-15T09:50:46.543715214+01:00 mongodb 08:50:46.54 DEBUG ==> Waiting for primary node...
2024-03-15T09:50:46.545308003+01:00 mongodb 08:50:46.54 INFO  ==> Trying to connect to MongoDB server litmus-prod01-mongodb-0.litmus-prod01-mongodb-headless.litmus.svc.cluster.local...
2024-03-15T09:50:46.551996683+01:00 mongodb 08:50:46.55 INFO  ==> Found MongoDB server listening at litmus-prod01-mongodb-0.litmus-prod01-mongodb-headless.litmus.svc.cluster.local:27017 !
2024-03-15T09:51:27.409527026+01:00 mongodb 08:51:27.40 ERROR ==> Node litmus-prod01-mongodb-0.litmus-prod01-mongodb-headless.litmus.svc.cluster.local did not become available
2024-03-15T09:51:27.414338512+01:00 mongodb 08:51:27.41 INFO  ==> Stopping MongoDB...