tuetenk0pp / sharelatex-full

Overleaf docker image with all packages available to tlmgr
https://hub.docker.com/r/tuetenk0pp/sharelatex-full
GNU General Public License v3.0
65 stars 13 forks

After update all files and accounts are gone #28

Closed: armanhamzehlou closed this issue 1 year ago

armanhamzehlou commented 1 year ago

Describe the bug After a power outage, the Docker container crashed and would no longer load.

To Reproduce I referred to the documentation in the repository and used the YML file to update the containers. Beforehand, I made a backup of the "mongo_data", "sharelatex_data" and "dist_data" folders. I also applied the fix discussed in "Cannot connect to mongodb #25". Now the website loads.

Expected behavior I was hoping that by restoring the contents of the old "mongo_data", "sharelatex_data" and "dist_data" folders, I would get my account and data back. Instead, I have to register for a new account, and it is empty.


tuetenk0pp commented 1 year ago

Can you post your configuration files (old and new) along with logs and the output of `docker volume ls`? This does indeed sound very painful.

armanhamzehlou commented 1 year ago

Right now it is working fine, except that there is no data in it anymore. So, does the log help? This is the new file:

```yaml
# This is meant for use in development, use the method described in README.md for deployment instead.
version: '2.2'
services:
  sharelatex:
    restart: always
    image: tuetenk0pp/sharelatex-full:latest
    container_name: sharelatex-full
    depends_on:
      mongo:
        condition: service_healthy
      redis:
        condition: service_started
    ports:

volumes:
  sharelatex_data:
  mongo_data:
  redis_data:
```
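Note that Compose prefixes named volumes with the project name (typically the directory name), so a file like the one above creates volumes such as `sharelatex-full-master_mongo_data` rather than plain `mongo_data`. A pre-existing, unprefixed volume can be reused by declaring it external; a hedged sketch (the volume name is an assumption):

```yaml
# Sketch: reuse an existing, unprefixed volume instead of letting
# Compose create a new project-prefixed one.
volumes:
  mongo_data:
    external:
      name: mongo_data   # pre-existing volume on the host
```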

The old file was exactly like the one above, except that the mongo version was 4. I upgraded after one year.

Is there any way to recover the files in "data/user_files" inside sharelatex_data?
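For reference, files can usually be copied out of a named volume with a throwaway container; a sketch, assuming the volume is named `sharelatex_data` and the files live under `data/user_files`:

```shell
# Mount the volume read-only into a scratch container and tar its
# contents out to the host. Volume and path names are assumptions.
docker run --rm -v sharelatex_data:/data:ro alpine \
  tar -C /data -cf - data/user_files > user_files_backup.tar
```

The read-only mount (`:ro`) keeps the operation non-destructive.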

This is the log I have from the mongo container:

2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.754+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.767+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.768+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"9a8e415ccc0e"}} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.768+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.22","gitVersion":"fc832685b99221cffb1f5bb5a4ff5ad3e1c416b2","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.768+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.768+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"replication":{"replSet":"overleaf"}}}} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.771+00:00"},"s":"W", "c":"STORAGE", "id":22271, "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"/data/db/mongod.lock"}} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.772+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data 
files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.772+00:00"},"s":"W", "c":"STORAGE", "id":22302, "ctx":"initandlisten","msg":"Recovering data from the last clean checkpoint."} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.772+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]} 2023-07-11 00:45:20 {"t":{"$date":"2023-07-11T04:45:20.772+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=5539M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} 2023-07-11 00:45:21 {"t":{"$date":"2023-07-11T04:45:21.805+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050721:805259][1:0x7feb4bfd9cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6"}} 2023-07-11 00:45:22 {"t":{"$date":"2023-07-11T04:45:22.377+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050722:377234][1:0x7feb4bfd9cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6"}} 2023-07-11 00:45:22 {"t":{"$date":"2023-07-11T04:45:22.809+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050722:809941][1:0x7feb4bfd9cc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 5/31488 to 6/256"}} 2023-07-11 00:45:22 
{"t":{"$date":"2023-07-11T04:45:22.811+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050722:811042][1:0x7feb4bfd9cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6"}} 2023-07-11 00:45:22 {"t":{"$date":"2023-07-11T04:45:22.969+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050722:969880][1:0x7feb4bfd9cc0], file:collection-2-3806739466930613589.wt, txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.098+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050723:98378][1:0x7feb4bfd9cc0], file:collection-2-3806739466930613589.wt, txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (1689050495, 3)"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.098+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050723:98455][1:0x7feb4bfd9cc0], file:collection-2-3806739466930613589.wt, txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (1689050490, 3)"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.113+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1689050723:113547][1:0x7feb4bfd9cc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 33, snapshot max: 33 snapshot count: 0, oldest timestamp: (1689050490, 3) , meta checkpoint timestamp: (1689050495, 3) base write gen: 73"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.217+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":2445}} 2023-07-11 00:45:23 
{"t":{"$date":"2023-07-11T04:45:23.217+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":1689050495,"i":3}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.228+00:00"},"s":"I", "c":"STORAGE", "id":22383, "ctx":"initandlisten","msg":"The size storer reports that the oplog contains","attr":{"numRecords":0,"dataSize":0}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.228+00:00"},"s":"I", "c":"STORAGE", "id":22382, "ctx":"initandlisten","msg":"WiredTiger record store oplog processing finished","attr":{"durationMillis":0}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.230+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.286+00:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.286+00:00"},"s":"W", "c":"CONTROL", "id":22178, "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
We suggest setting it to 'never'","tags":["startupWarnings"]} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.289+00:00"},"s":"I", "c":"STORAGE", "id":22251, "ctx":"initandlisten","msg":"Dropping unknown ident","attr":{"ident":"collection-13-7644112833784962988"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.294+00:00"},"s":"I", "c":"STORAGE", "id":22251, "ctx":"initandlisten","msg":"Dropping unknown ident","attr":{"ident":"index-14-7644112833784962988"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.331+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.338+00:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.351+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.359+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigStartingUp","oldState":"ConfigPreStart"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.360+00:00"},"s":"I", "c":"REPL", "id":4280500, "ctx":"initandlisten","msg":"Attempting to create internal replication collections"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.374+00:00"},"s":"I", "c":"REPL", "id":4280501, "ctx":"initandlisten","msg":"Attempting to load local voted for document"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.374+00:00"},"s":"I", "c":"REPL", "id":4280502, "ctx":"initandlisten","msg":"Searching for local Rollback ID document"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.379+00:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback 
ID","attr":{"rbid":1}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.379+00:00"},"s":"I", "c":"REPL", "id":501401, "ctx":"initandlisten","msg":"Incrementing the rollback ID after unclean shutdown"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.384+00:00"},"s":"I", "c":"REPL", "id":21532, "ctx":"initandlisten","msg":"Incremented the rollback ID","attr":{"rbid":2}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.391+00:00"},"s":"I", "c":"REPL", "id":4280504, "ctx":"initandlisten","msg":"Cleaning up any partially applied oplog batches & reading last op from oplog"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.392+00:00"},"s":"I", "c":"REPL", "id":21557, "ctx":"initandlisten","msg":"Removing unapplied oplog entries after oplogTruncateAfterPoint","attr":{"oplogTruncateAfterPoint":{"":{"$timestamp":{"t":1689050495,"i":6}}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.392+00:00"},"s":"I", "c":"REPL", "id":21553, "ctx":"initandlisten","msg":"Truncating oplog from truncateAfterOplogEntryTimestamp (non-inclusive)","attr":{"truncateAfterOplogEntryTimestamp":{"$timestamp":{"t":1689050495,"i":6}},"oplogTruncateAfterPoint":{"$timestamp":{"t":1689050495,"i":6}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.392+00:00"},"s":"I", "c":"REPL", "id":21554, "ctx":"initandlisten","msg":"Replication recovery oplog truncation finished","attr":{"durationMillis":0}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.411+00:00"},"s":"I", "c":"REPL", "id":21544, "ctx":"initandlisten","msg":"Recovering from stable timestamp","attr":{"stableTimestamp":{"$timestamp":{"t":1689050495,"i":3}},"topOfOplog":{"ts":{"$timestamp":{"t":1689050495,"i":6}},"t":1},"appliedThrough":{"ts":{"$timestamp":{"t":0,"i":0}},"t":-1},"oplogTruncateAfterPoint":{"$timestamp":{"t":0,"i":0}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.411+00:00"},"s":"I", "c":"REPL", "id":21545, "ctx":"initandlisten","msg":"Starting recovery oplog application at the 
stable timestamp","attr":{"stableTimestamp":{"$timestamp":{"t":1689050495,"i":3}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.411+00:00"},"s":"I", "c":"REPL", "id":21550, "ctx":"initandlisten","msg":"Replaying stored operations from startPoint (inclusive) to endPoint (inclusive)","attr":{"startPoint":{"$timestamp":{"t":1689050495,"i":3}},"endPoint":{"$timestamp":{"t":1689050495,"i":6}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.419+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"ReplWriterWorker-0","msg":"createCollection","attr":{"namespace":"admin.system.keys","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"d7c3936c-9624-45fe-928f-3bc964a8968a"}},"options":{"uuid":{"$uuid":"d7c3936c-9624-45fe-928f-3bc964a8968a"}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.499+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"ReplWriterWorker-0","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.keys","index":"_id_","commitTimestamp":{"$timestamp":{"t":1689050495,"i":4}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.500+00:00"},"s":"I", "c":"REPL", "id":21536, "ctx":"initandlisten","msg":"Completed oplog application for recovery","attr":{"numOpsApplied":3,"numBatches":2,"applyThroughOpTime":{"ts":{"$timestamp":{"t":1689050495,"i":6}},"t":1}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.500+00:00"},"s":"I", "c":"REPL", "id":4280506, "ctx":"initandlisten","msg":"Reconstructing prepared transactions"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.501+00:00"},"s":"I", "c":"REPL", "id":4280507, "ctx":"initandlisten","msg":"Loaded replica set config, scheduled callback to set local config"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.501+00:00"},"s":"I", "c":"REPL", "id":4280508, "ctx":"ReplCoord-0","msg":"Attempting to set local replica set config; validating config for startup"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.515+00:00"},"s":"I", 
"c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.516+00:00"},"s":"I", "c":"REPL", "id":4280509, "ctx":"ReplCoord-0","msg":"Local configuration validated for startup"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.516+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"ReplCoord-0","msg":"Setting new configuration state","attr":{"newState":"ConfigSteady","oldState":"ConfigStartingUp"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.519+00:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.520+00:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.521+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.521+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.521+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.522+00:00"},"s":"I", "c":"REPL", "id":21392, "ctx":"ReplCoord-0","msg":"New replica set config in 
use","attr":{"config":{"_id":"overleaf","version":1,"term":1,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"members":[{"_id":0,"host":"mongo:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1.0,"tags":{},"slaveDelay":0,"votes":1}],"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"64acdd7ee9d4bb49ae4997aa"}}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.522+00:00"},"s":"I", "c":"REPL", "id":21393, "ctx":"ReplCoord-0","msg":"Found self in config","attr":{"hostAndPort":"mongo:27017"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.522+00:00"},"s":"I", "c":"REPL", "id":21358, "ctx":"ReplCoord-0","msg":"Replica set state transition","attr":{"newState":"STARTUP2","oldState":"STARTUP"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.523+00:00"},"s":"I", "c":"REPL", "id":21320, "ctx":"ReplCoord-0","msg":"Updated term","attr":{"term":1}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.523+00:00"},"s":"I", "c":"REPL", "id":21306, "ctx":"ReplCoord-0","msg":"Starting replication storage threads"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.525+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NamespaceNotFound: config.system.sessions does not exist"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.528+00:00"},"s":"I", "c":"REPL", "id":4280512, "ctx":"ReplCoord-0","msg":"No initial sync required. 
Attempting to begin steady replication"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.528+00:00"},"s":"I", "c":"REPL", "id":21358, "ctx":"ReplCoord-0","msg":"Replica set state transition","attr":{"newState":"RECOVERING","oldState":"STARTUP2"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.529+00:00"},"s":"I", "c":"REPL", "id":21299, "ctx":"ReplCoord-0","msg":"Starting replication fetcher thread"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.529+00:00"},"s":"I", "c":"REPL", "id":21300, "ctx":"ReplCoord-0","msg":"Starting replication applier thread"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.529+00:00"},"s":"I", "c":"REPL", "id":21224, "ctx":"OplogApplier-0","msg":"Starting oplog application"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.538+00:00"},"s":"I", "c":"REPL", "id":21358, "ctx":"OplogApplier-0","msg":"Replica set state transition","attr":{"newState":"SECONDARY","oldState":"RECOVERING"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.539+00:00"},"s":"I", "c":"ELECTION", "id":4615652, "ctx":"OplogApplier-0","msg":"Starting an election, since we've seen no PRIMARY in election timeout period","attr":{"electionTimeoutPeriodMillis":10000}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.539+00:00"},"s":"I", "c":"ELECTION", "id":21438, "ctx":"OplogApplier-0","msg":"Conducting a dry run election to see if we could be elected","attr":{"currentTerm":1}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.540+00:00"},"s":"I", "c":"REPL", "id":21301, "ctx":"ReplCoord-0","msg":"Starting replication reporter thread"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.540+00:00"},"s":"I", "c":"ELECTION", "id":21444, "ctx":"ReplCoord-1","msg":"Dry election run succeeded, running for election","attr":{"newTerm":2}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.541+00:00"},"s":"I", "c":"ELECTION", "id":6015300, "ctx":"ReplCoord-1","msg":"Storing last vote document in local storage for my 
election","attr":{"lastVote":{"term":2,"candidateIndex":0}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.556+00:00"},"s":"I", "c":"ELECTION", "id":21450, "ctx":"ReplCoord-1","msg":"Election succeeded, assuming primary role","attr":{"term":2}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":21358, "ctx":"ReplCoord-1","msg":"Replica set state transition","attr":{"newState":"PRIMARY","oldState":"SECONDARY"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":21106, "ctx":"ReplCoord-1","msg":"Resetting sync source to empty","attr":{"previousSyncSource":":27017"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":21359, "ctx":"ReplCoord-1","msg":"Entering primary catch-up mode"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":6015304, "ctx":"ReplCoord-1","msg":"Skipping primary catchup since we are the only node in the replica set."} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":21363, "ctx":"ReplCoord-1","msg":"Exited primary catch-up mode"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":21107, "ctx":"ReplCoord-1","msg":"Stopping replication producer"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.557+00:00"},"s":"I", "c":"REPL", "id":21239, "ctx":"ReplBatcher","msg":"Oplog buffer has been drained","attr":{"term":2}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":21239, "ctx":"ReplBatcher","msg":"Oplog buffer has been drained","attr":{"term":2}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":21343, "ctx":"RstlKillOpThread","msg":"Starting to kill user operations"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":21344, 
"ctx":"RstlKillOpThread","msg":"Stopped killing user operations"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":21340, "ctx":"RstlKillOpThread","msg":"State transition ops metrics","attr":{"metrics":{"lastStateTransition":"stepUp","userOpsKilled":0,"userOpsRunning":0}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":4508103, "ctx":"OplogApplier-0","msg":"Increment the config term via reconfig"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":6015313, "ctx":"OplogApplier-0","msg":"Replication config state is Steady, starting reconfig"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"OplogApplier-0","msg":"Setting new configuration state","attr":{"newState":"ConfigReconfiguring","oldState":"ConfigSteady"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":21353, "ctx":"OplogApplier-0","msg":"replSetReconfig config object parses ok","attr":{"numMembers":1}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.558+00:00"},"s":"I", "c":"REPL", "id":51814, "ctx":"OplogApplier-0","msg":"Persisting new config to disk"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.563+00:00"},"s":"I", "c":"REPL", "id":6015315, "ctx":"OplogApplier-0","msg":"Persisted new config to disk"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.563+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"OplogApplier-0","msg":"Setting new configuration state","attr":{"newState":"ConfigSteady","oldState":"ConfigReconfiguring"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.563+00:00"},"s":"I", "c":"REPL", "id":21392, "ctx":"OplogApplier-0","msg":"New replica set config in 
use","attr":{"config":{"_id":"overleaf","version":1,"term":2,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"members":[{"_id":0,"host":"mongo:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1.0,"tags":{},"slaveDelay":0,"votes":1}],"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"64acdd7ee9d4bb49ae4997aa"}}}}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.563+00:00"},"s":"I", "c":"REPL", "id":21393, "ctx":"OplogApplier-0","msg":"Found self in config","attr":{"hostAndPort":"mongo:27017"}} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.563+00:00"},"s":"I", "c":"REPL", "id":6015310, "ctx":"OplogApplier-0","msg":"Starting to transition to primary."} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.564+00:00"},"s":"I", "c":"REPL", "id":6015309, "ctx":"OplogApplier-0","msg":"Logging transition to primary to oplog on stepup"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.564+00:00"},"s":"I", "c":"STORAGE", "id":20657, "ctx":"OplogApplier-0","msg":"IndexBuildsCoordinator::onStepUp - this node is stepping up to primary"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.564+00:00"},"s":"I", "c":"REPL", "id":21331, "ctx":"OplogApplier-0","msg":"Transition to primary complete; database writes are now permitted"} 2023-07-11 00:45:23 {"t":{"$date":"2023-07-11T04:45:23.564+00:00"},"s":"I", "c":"REPL", "id":6015306, "ctx":"OplogApplier-0","msg":"Applier already left draining state, exiting."} 2023-07-11 00:45:24 {"t":{"$date":"2023-07-11T04:45:24.002+00:00"},"s":"I", "c":"FTDC", "id":20631, "ctx":"ftdc","msg":"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost","attr":{"error":{"code":0,"codeName":"OK"}}} 2023-07-11 
00:45:30 {"t":{"$date":"2023-07-11T04:45:30.514+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:51868","connectionId":1,"connectionCount":1}} 2023-07-11 00:45:30 {"t":{"$date":"2023-07-11T04:45:30.515+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:51868","client":"conn1","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.22"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}}} 2023-07-11 00:45:30 {"t":{"$date":"2023-07-11T04:45:30.548+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"Connection ended","attr":{"remote":"127.0.0.1:51868","connectionId":1,"connectionCount":0}} 2023-07-11 00:45:32 {"t":{"$date":"2023-07-11T04:45:32.555+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.18.0.4:53790","connectionId":2,"connectionCount":1}} 2023-07-11 00:45:32 {"t":{"$date":"2023-07-11T04:45:32.558+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"172.18.0.4:53790","client":"conn2","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.22"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}}} 2023-07-11 00:45:32 {"t":{"$date":"2023-07-11T04:45:32.568+00:00"},"s":"I", "c":"REPL", "id":21356, "ctx":"conn2","msg":"replSetInitiate admin command received from client"} 2023-07-11 00:45:32 {"t":{"$date":"2023-07-11T04:45:32.580+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"172.18.0.4:53790","connectionId":2,"connectionCount":0}} 2023-07-11 00:45:34 {"t":{"$date":"2023-07-11T04:45:34.817+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection 
accepted","attr":{"remote":"172.18.0.5:53382","connectionId":3,"connectionCount":1}} 2023-07-11 00:45:34 {"t":{"$date":"2023-07-11T04:45:34.831+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"172.18.0.5:53382","client":"conn3","doc":{"driver":{"name":"nodejs|Mongoose","version":"4.13.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.10.16.3-microsoft-standard-WSL2"},"platform":"Node.js v16.20.1, LE (unified)","version":"4.13.0|6.9.1","application":{"name":"web"}}}} 2023-07-11 00:45:34 {"t":{"$date":"2023-07-11T04:45:34.848+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.18.0.5:53384","connectionId":4,"connectionCount":2}} 2023-07-11 00:45:34 {"t":{"$date":"2023-07-11T04:45:34.848+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn4","msg":"client metadata","attr":{"remote":"172.18.0.5:53384","client":"conn4","doc":{"driver":{"name":"nodejs|Mongoose","version":"4.13.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.10.16.3-microsoft-standard-WSL2"},"platform":"Node.js v16.20.1, LE (unified)","version":"4.13.0|6.9.1","application":{"name":"web"}}}} 2023-07-11 00:45:34 {"t":{"$date":"2023-07-11T04:45:34.877+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn4","msg":"Connection ended","attr":{"remote":"172.18.0.5:53384","connectionId":4,"connectionCount":1}} 2023-07-11 00:45:34 {"t":{"$date":"2023-07-11T04:45:34.877+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.18.0.5:53382","connectionId":3,"connectionCount":0}} 2023-07-11 00:45:37 {"t":{"$date":"2023-07-11T04:45:37.928+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.18.0.5:53390","connectionId":5,"connectionCount":1}} 2023-07-11 00:45:37 {"t":{"$date":"2023-07-11T04:45:37.940+00:00"},"s":"I", "c":"NETWORK", "id":51800, 
"ctx":"conn5","msg":"client metadata","attr":{"remote":"172.18.0.5:53390","client":"conn5","doc":{"driver":{"name":"nodejs|Mongoose","version":"4.13.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.10.16.3-microsoft-standard-WSL2"},"platform":

tuetenk0pp commented 1 year ago

> volumes: sharelatex_data: mongo_data: redis_data:

Is this the output of the command `docker volume ls`? I'm asking because I wonder whether you actually passed the right volumes. After an update, the containers run migration steps on the data but never delete it entirely, so there must be something there.

Also, do you have backups of your docker volumes?

armanhamzehlou commented 1 year ago

> volumes: sharelatex_data: mongo_data: redis_data:
>
> Is this the output of the command `docker volume ls`? I'm asking because I wonder whether you actually passed the right volumes. After an update, the containers run migration steps on the data but never delete it entirely, so there must be something there.
>
> Also, do you have backups of your docker volumes?

OK, you are right, I have something like this:

```
local   0c12852b068b164e78a0f547b0815f20224c0643d7fc8297d5f8bc56d3916aba
local   1e8af0fb8bd7a0cb9709697edbe2b028d336daed5a8b562d3bf7613e2c545b57
local   4eb946b8404f01cf91648db305dca82e21fddcc6c3697cb37409ebfa84107309
local   8a3bbe4f70632f39874a4611dde80ae4a9530c40b61250dc43e8d0567459bb3a
local   19c8cb3c4c993e221ade82890c69150f8ce436d31753452c85ad91c60a2ed18f
local   79d2cf5a79d94d4c3be285b34e39ce47fb8a6138d1779417cea235dea7bafd76
local   221451bebb1b5496ae473a301f7a0f41e78724fd064bcb6b04cc78340a6462e4
local   a2ebecf7a8009c3ab1431fc5be117ad5139f581b8805323826b406ddc6103ea1
local   bc6a57e7ae80fadff3588a0385bfe74922bf7b04053415df5d6ee4bb9a47c2a0
local   bf63a78aad4787b558710820b5d598126964e359eaeefb838db5a377a39d6ef8
local   ce4a4a4adca16d651a11317f677296a3a29ec8177670ba71eb036be5a75ee48e
local   cfddaf243ed2c752ad883f90db989407188ea485a5584845cc23460a0dedd503
local   mongo_data (Not in use - all content modified 4 hours ago)
local   sharelatex-full-master_mongo_data (In Use - Actively being modified)
local   sharelatex-full-master_redis_data
local   sharelatex-full-master_sharelatex_data
```

My understanding is that I did not put my files in the correct volume, since the updated setup based on "Cannot connect to mongodb #25" introduces a new volume.
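One way to locate the volume that actually holds the old data is to list the top level of each candidate; a sketch:

```shell
# List the contents of every local volume to find the one that still
# contains data/user_files. Read-only mounts keep this non-destructive.
for v in $(docker volume ls -q); do
  echo "== $v =="
  docker run --rm -v "$v":/data:ro alpine ls /data
done
```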

tuetenk0pp commented 1 year ago

I suppose you have to find the right volumes then...

Once you have them: there is a way to rename volumes by copying them; ask ChatGPT about it. You should also make regular backups of your volumes. Also, consider using the Overleaf Toolkit, as it is the officially supported way to run Overleaf.
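The rename-by-copy approach mentioned above can be sketched as follows (the volume names are assumptions; substitute the ones found via `docker volume ls`, and stop the stack first so nothing writes to the destination while copying):

```shell
# Copy one named volume's contents into another via a throwaway
# container. cp -a preserves ownership, permissions, and timestamps.
OLD=mongo_data                          # assumed: volume holding the old data
NEW=sharelatex-full-master_mongo_data   # assumed: volume the new stack mounts
docker volume create "$NEW"
docker run --rm -v "$OLD":/from:ro -v "$NEW":/to alpine sh -c 'cp -a /from/. /to/'
```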

armanhamzehlou commented 1 year ago

Thank you so much. The Overleaf Toolkit has shell scripts in it that I cannot use on Windows, but I suppose renaming the volume is the way to go.

tuetenk0pp commented 1 year ago

I see. There's always WSL in that case ;)

I will mark this as closed if you agree.