DigitalSlideArchive / digital_slide_archive

The official deployment of the Digital Slide Archive and HistomicsTK.
https://digitalslidearchive.github.io
Apache License 2.0

Authentication failure #276

Open ds2268 opened 1 year ago

ds2268 commented 1 year ago

I have migrated an old DSA (1 year+) deployment to the new one by copying the .dsa directory to the new VM and using deploy.sh script:  ./deploy.sh -j 4 --assetstore PATH/.dsa/assetstore/ --cache 8192 -d PATH/.dsa/db/ --logs PATH/.dsa/logs --port 9090 --user "USER" --password "PASS" --cli start

I get the following error:

...
TASK [provision : Wait for girder startup] *************************************
ok: [localhost]
TASK [provision : Wait for girder to report version] ***************************
ok: [localhost]

TASK [provision : Set worker broker setting] ***********************************

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not Authenticate!"}
PLAY RECAP *********************************************************************
localhost                  : ok=12   changed=5    unreachable=0    failed=1    skipped=80   rescued=0    ignored=0

Any idea?

dsa_mongodb looks fine:

{"t":{"$date":"2023-07-11T16:45:52.252+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true}}}
{"t":{"$date":"2023-07-11T16:45:52.256+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2023-07-11T16:45:52.258+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2023-07-11T16:45:52.260+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrationDonors"}}
{"t":{"$date":"2023-07-11T16:45:52.260+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2023-07-11T16:45:52.260+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors"}}
{"t":{"$date":"2023-07-11T16:45:52.260+00:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2023-07-11T16:45:52.261+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongodb"}}
{"t":{"$date":"2023-07-11T16:45:52.261+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.7","gitVersion":"202ad4fda2618c652e35f5981ef2f903d8dd1f1a","openSSLVersion":"OpenSSL 3.0.2 15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2204","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2023-07-11T16:45:52.261+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"22.04"}}}
{"t":{"$date":"2023-07-11T16:45:52.261+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2023-07-11T16:45:52.263+00:00"},"s":"W",  "c":"STORAGE",  "id":22271,   "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"/data/db/mongod.lock"}}
{"t":{"$date":"2023-07-11T16:45:52.263+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2023-07-11T16:45:52.263+00:00"},"s":"W",  "c":"STORAGE",  "id":22302,   "ctx":"initandlisten","msg":"Recovering data from the last clean checkpoint."}
{"t":{"$date":"2023-07-11T16:45:52.263+00:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2023-07-11T16:45:52.263+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3455M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],"}}
{"t":{"$date":"2023-07-11T16:45:53.915+00:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":1652}}
{"t":{"$date":"2023-07-11T16:45:53.915+00:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2023-07-11T16:45:53.993+00:00"},"s":"W",  "c":"CONTROL",  "id":22120,   "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
{"t":{"$date":"2023-07-11T16:45:53.993+00:00"},"s":"W",  "c":"CONTROL",  "id":5123300, "ctx":"initandlisten","msg":"vm.max_map_count is too low","attr":{"currentValue":65530,"recommendedMinimum":1677720,"maxConns":838860},"tags":["startupWarnings"]}
{"t":{"$date":"2023-07-11T16:45:54.026+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":17,"maxWireVersion":17},"outgoing":{"minWireVersion":17,"maxWireVersion":17},"isInternalClient":true}}}
{"t":{"$date":"2023-07-11T16:45:54.026+00:00"},"s":"I",  "c":"REPL",     "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"6.0","context":"startup"}}
{"t":{"$date":"2023-07-11T16:45:54.027+00:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
{"t":{"$date":"2023-07-11T16:45:54.089+00:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2023-07-11T16:45:54.090+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2023-07-11T16:45:54.233+00:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
{"t":{"$date":"2023-07-11T16:45:54.233+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2023-07-11T16:45:54.237+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2023-07-11T16:45:54.237+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2023-07-11T16:45:54.237+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
{"t":{"$date":"2023-07-11T16:47:09.507+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:50292","uuid":"f88899bf-9164-41eb-8f9f-839498dcc135","connectionId":1,"connectionCount":1}}
{"t":{"$date":"2023-07-11T16:47:09.508+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn1","msg":"client metadata","attr":{"remote":"172.23.0.6:50292","client":"conn1","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:09.510+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:50300","uuid":"aa987c70-d033-4ced-97a5-aaf8bbfd3ae8","connectionId":2,"connectionCount":2}}
{"t":{"$date":"2023-07-11T16:47:09.511+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn2","msg":"client metadata","attr":{"remote":"172.23.0.6:50300","client":"conn2","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:09.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:50306","uuid":"28b190a1-29b6-4c21-b3b7-fbcda5e194d6","connectionId":3,"connectionCount":3}}
{"t":{"$date":"2023-07-11T16:47:09.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn3","msg":"client metadata","attr":{"remote":"172.23.0.6:50306","client":"conn3","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:16.782+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:50322","uuid":"af5fda2e-b840-4a88-baf8-83b7d9638233","connectionId":4,"connectionCount":4}}
{"t":{"$date":"2023-07-11T16:47:16.782+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn4","msg":"client metadata","attr":{"remote":"172.23.0.6:50322","client":"conn4","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:16.784+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:50330","uuid":"6bfe1fe4-b713-4554-86e4-f766493d40c2","connectionId":5,"connectionCount":5}}
{"t":{"$date":"2023-07-11T16:47:16.784+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn5","msg":"client metadata","attr":{"remote":"172.23.0.6:50330","client":"conn5","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:16.785+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:50342","uuid":"35aa685e-88ce-4086-9dc1-636fd86f7125","connectionId":6,"connectionCount":6}}
{"t":{"$date":"2023-07-11T16:47:16.786+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn6","msg":"client metadata","attr":{"remote":"172.23.0.6:50342","client":"conn6","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:47.828+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn6","msg":"Connection ended","attr":{"remote":"172.23.0.6:50342","uuid":"35aa685e-88ce-4086-9dc1-636fd86f7125","connectionId":6,"connectionCount":4}}
{"t":{"$date":"2023-07-11T16:47:47.828+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn5","msg":"Connection ended","attr":{"remote":"172.23.0.6:50330","uuid":"6bfe1fe4-b713-4554-86e4-f766493d40c2","connectionId":5,"connectionCount":5}}
{"t":{"$date":"2023-07-11T16:47:47.829+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn4","msg":"Interrupted operation as its client disconnected","attr":{"opId":1566}}
{"t":{"$date":"2023-07-11T16:47:47.829+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn4","msg":"Connection ended","attr":{"remote":"172.23.0.6:50322","uuid":"af5fda2e-b840-4a88-baf8-83b7d9638233","connectionId":4,"connectionCount":3}}
{"t":{"$date":"2023-07-11T16:47:49.302+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:49032","uuid":"c7d37c06-7d58-4c99-9c34-d9259df6d17c","connectionId":7,"connectionCount":4}}
{"t":{"$date":"2023-07-11T16:47:49.303+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn7","msg":"client metadata","attr":{"remote":"172.23.0.6:49032","client":"conn7","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:49.305+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:49044","uuid":"25950aa2-30ba-432e-a6c6-f6d320b1a0fb","connectionId":8,"connectionCount":5}}
{"t":{"$date":"2023-07-11T16:47:49.306+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn8","msg":"client metadata","attr":{"remote":"172.23.0.6:49044","client":"conn8","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:47:49.306+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:49050","uuid":"729571ac-b7b4-4723-81d9-a9fe441df75f","connectionId":9,"connectionCount":6}}
{"t":{"$date":"2023-07-11T16:47:49.307+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn9","msg":"client metadata","attr":{"remote":"172.23.0.6:49050","client":"conn9","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:48:10.013+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn9","msg":"Connection ended","attr":{"remote":"172.23.0.6:49050","uuid":"729571ac-b7b4-4723-81d9-a9fe441df75f","connectionId":9,"connectionCount":5}}
{"t":{"$date":"2023-07-11T16:48:10.013+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn8","msg":"Connection ended","attr":{"remote":"172.23.0.6:49044","uuid":"25950aa2-30ba-432e-a6c6-f6d320b1a0fb","connectionId":8,"connectionCount":4}}
{"t":{"$date":"2023-07-11T16:48:10.013+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn7","msg":"Interrupted operation as its client disconnected","attr":{"opId":1934}}
{"t":{"$date":"2023-07-11T16:48:10.014+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn7","msg":"Connection ended","attr":{"remote":"172.23.0.6:49032","uuid":"c7d37c06-7d58-4c99-9c34-d9259df6d17c","connectionId":7,"connectionCount":3}}
{"t":{"$date":"2023-07-11T16:48:11.426+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:45490","uuid":"fb938d99-0a52-4c91-b961-4f1e12ddc11c","connectionId":10,"connectionCount":4}}
{"t":{"$date":"2023-07-11T16:48:11.427+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn10","msg":"client metadata","attr":{"remote":"172.23.0.6:45490","client":"conn10","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:48:11.429+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:45496","uuid":"d13b6edf-2422-4903-a296-127294f3d4ef","connectionId":11,"connectionCount":5}}
{"t":{"$date":"2023-07-11T16:48:11.430+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn11","msg":"client metadata","attr":{"remote":"172.23.0.6:45496","client":"conn11","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
{"t":{"$date":"2023-07-11T16:48:11.434+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.23.0.6:45502","uuid":"2da84403-1a4c-472b-9176-dcc3a2cc4487","connectionId":12,"connectionCount":6}}
{"t":{"$date":"2023-07-11T16:48:11.434+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn12","msg":"client metadata","attr":{"remote":"172.23.0.6:45502","client":"conn12","doc":{"driver":{"name":"PyMongo","version":"3.13.0"},"os":{"type":"Linux","name":"Linux","architecture":"x86_64","version":"5.19.0-1027-gcp"},"platform":"CPython 3.8.10.final.0"}}}
ds2268 commented 1 year ago

Credentials used with the deploy script are 100% OK.

manthey commented 1 year ago

It uses the provided admin username and password to set a value on the girder remote worker plugin. By the time ansible is printing that message, the system should be up enough that you can visit it via web browser. Can you see the remote worker plugin? Can you change the broker setting manually using your credentials?
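A quick way to test the credentials directly against the API (a sketch, assuming the default Girder REST endpoints and the port 9090 from the deploy command above; `USER:PASS` are placeholders):

```shell
# Hypothetical check: Girder issues a token via HTTP basic auth on this
# endpoint. Port 9090 comes from the deploy.sh invocation; adjust as needed.
API_URL="http://localhost:9090/api/v1/user/authentication"

# A 200 response means the credentials work; 401 means they do not.
# Guarded so the sketch is a no-op on machines without curl or a running DSA.
if command -v curl >/dev/null; then
  curl -s -o /dev/null -w "%{http_code}\n" -u "USER:PASS" "$API_URL" || true
fi
```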

ds2268 commented 1 year ago

Thank you @manthey for the fast reply!

I am able to get to Girder, but unable to log in with the old credentials. Do you have any solution? The old deployment is running fine, but we want to migrate it to the cloud and would really like to preserve all the data, users, etc. I could do that last year when I moved from HDD to SSD. The same process (with probably much newer versions of sub-components) is giving me a hard time now. The alternative is to transfer the data and annotations via the API, but then we lose all the metadata (time & date), and it's also cumbersome.

ds2268 commented 1 year ago

Looks like dsa_mongodb does not have all the data, despite moving the .dsa folder and pointing to the right paths in the deploy script. Note also that the old deploy runs quite an old MongoDB (4.4.19).

Old deploy (note 20GB of data):

mongo "mongodb://localhost:27017/girder"

[screenshot: old deployment `show dbs` output, ~20GB girder database]

"New" deploy:

mongosh "mongodb://localhost:27017/girder" (newer MongoDB uses "mongosh" instead of "mongo")

[screenshot: new deployment `show dbs` output, near-empty databases]

So it looks like the MongoDB data was also not picked up? Or it was overwritten with a new empty workspace... no idea. This is probably why the old credentials are not working on Girder...

/db is otherwise properly mounted on the new dsa_mongodb container's /data directory, with the expected size of ~20GB.

dgutman commented 1 year ago

So as you noted, the mongo database credentials may have changed. Although the girder database is there... no? 308KB seems kind of small, though.


-- David A Gutman, M.D. Ph.D. Associate Professor of Pathology Emory University School of Medicine

ds2268 commented 1 year ago

It should be 20GB, as in the old deployment. I think it just created default new tables in Mongo because it somehow couldn't access the old mongo /db files. No credentials were used for mongo, as per the default. Credentials, I think, are only used for Girder.

dgutman commented 1 year ago

Yeah--- I meant whether or not you can connect to mongo. When you start up the containers with docker-compose, you can also try explicitly choosing an older version of mongo, just in case something weird happened between the latest version and the one you were using.

Also (obviously), make a copy of your mongo directory and run from the copy...

What is likely happening, though, is in the docker-compose file: note the ./db:/data/db line... make sure that gets changed to ./.dsa:/data/db (or wherever the database lives, relative to where you are running the docker container):

```yaml
mongodb:
  image: "mongo:latest"
  # Set DSA_USER to your user id (e.g., DSA_USER=$(id -u):$(id -g))
  # so that database files are owned by yourself.
  user: ${DSA_USER:-PLEASE SET DSA_USER}
  restart: unless-stopped
  # Using --nojournal means that changes can be lost between the last
  # checkpoint and an unexpected shutdown, but can substantially reduce
  # writes.
  #   Limiting maxConns reduces the amount of shared memory demanded by
  # mongo.  Remove this limit or increase the host vm.max_map_count value.
  command: --nojournal --maxConns 1000
  volumes:
    # Location to store database files
    - ./db:/data/db
```
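To confirm which host directory actually ends up bound to /data/db, you can inspect the running container (a sketch; the container name `dsa-mongodb-1` is an assumption, confirm it with `docker ps`):

```shell
# Sketch: print the bind mounts of the mongodb container to verify the
# host-side source of /data/db. The container name is an assumption;
# confirm with `docker ps` first.
CONTAINER="dsa-mongodb-1"

# Guarded so the sketch is a no-op on machines without docker.
if command -v docker >/dev/null; then
  docker inspect \
    -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' \
    "$CONTAINER" || true
fi
```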


ds2268 commented 1 year ago

I have moved to the docker-compose deploy instead of my old deploy_docker.py. I moved the assetstore and db files to /dsa/db etc. (so that the default paths in docker-compose were OK) and made sure that DSA_USER was set to the same UID. I have also used the same version of MongoDB. I thus have no idea what else it can be :(

I didn't add a user and pass, but if I understand correctly, it's not even needed, given that there is already an admin user in the db, which is not the default one.

no idea.

DSA gets deployed... just fully fresh. It looks like it ignores the old MongoDB assets and creates new ones, with no errors.

The 20GB of old Mongo data is mounted to /data/db, but "show dbs" inside mongo shows ~0GB, empty databases... new ones were created.

Was there any change such that on a new deploy, new databases are created regardless of what is mounted in /db?

dgutman commented 1 year ago

The mystery deepens... so once the containers are running (docker ps), can you exec into the mongo container? i.e. 'docker exec -it dsa-mongodb-1 bash'. I think that's what it is called; you can confirm the exact name given by docker compose by running 'docker ps'.

Double-check your data is actually in /data/db, i.e. there's 20 gigs of stuff there, not 300 KB. It's also possible there's some weird permission issue, so that your dsa data directory is not readable by the DSA user, or some other painful permission problem... at the very least, confirm that within the context of the docker container, it's actually pointing to /data/db.
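The checks above can be sketched as follows (the container name is an assumption; adjust it from `docker ps`):

```shell
# Sketch of the suggested checks, run from the host. The container name is
# an assumption; confirm it with `docker ps`.
CONTAINER="dsa-mongodb-1"

# Guarded so the sketch is a no-op on machines without docker.
if command -v docker >/dev/null; then
  # 1. Is the ~20GB of data really under /data/db inside the container?
  docker exec "$CONTAINER" du -sh /data/db || true
  # 2. Who owns the files, and can the container's user read them?
  docker exec "$CONTAINER" ls -lan /data/db || true
  docker exec "$CONTAINER" id || true
fi
```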


ds2268 commented 1 year ago

Yes, I am inside dsa_mongodb

I have no name!@c1b106c14368:/$ du -sh /data/db/
20G /data/db/

This shows that ./db is mounted inside dsa_mongodb... but...

then I go to MongoDB inside the same container:

mongo "mongodb://localhost:27017/girder"

and then execute "show dbs" and get:

> show dbs
admin   0.000GB
config  0.000GB
girder  0.001GB
local   0.000GB
>

So it looks like new (empty) DB entries were created, as the DSA is fully functioning, just fresh :) I did the exact same thing a couple of months ago, and it worked with such a copy of the MongoDB files. I don't want to touch the old deployment now, because it probably still works only thanks to the old images of either DSA or some dependency.

ds2268 commented 1 year ago

I have stopped the existing deployment, copied the DB files to the new VM... and then restarted the old deployment, and it still works (with the old images that were already there). But the copied db files don't work on the new VM with the latest images (except that for MongoDB I now have the exact same version, 4.4.19)... mystery :)

ds2268 commented 1 year ago

I have managed to fix it, but it involved some workarounds.

I used mongodump to dump the existing DBs from the working deployment that I wanted to migrate. Note that this is not space-efficient, as mongodump produces a much larger dump (e.g., 20GB of data resulted in 60GB of BSON files).

I then connected to the dsa_mongodb container on the new deployment, which did not "recognize" the previously copied raw MongoDB /db files, and manually restored the content of the db from the dump using mongorestore.
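The dump-and-restore path looks roughly like this (a sketch; the dump directory and container name are illustrative assumptions, not the exact commands from this thread):

```shell
# Sketch of the mongodump/mongorestore migration. Paths and container name
# are illustrative assumptions. Guarded so it is a no-op without the tools.
DUMP_DIR="/tmp/girder-dump"

if command -v mongodump >/dev/null; then
  # On the OLD deployment: dump the girder database to BSON files.
  mongodump --uri="mongodb://localhost:27017/girder" --out="$DUMP_DIR" || true
fi

if command -v docker >/dev/null; then
  # On the NEW deployment: copy the dump into the mongo container and
  # restore it, dropping the freshly created empty collections first.
  docker cp "$DUMP_DIR" dsa-mongodb-1:/tmp/girder-dump || true
  docker exec dsa-mongodb-1 mongorestore --drop /tmp/girder-dump || true
fi
```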

I could now log in to the new DSA using the old credentials. All the data (labels, users) was preserved. The slides (copied assetstore) did not work initially, but the problem was that it expected the assetstore to be mounted at /opt/digital_slide_archive/assetstore instead of /assetstore, the default in the docker-compose configuration. I just changed that, and it seems I have successfully migrated the old DB to the new deployment (my old deployment was set up using deploy_docker.py, and the default mount locations were probably different there).
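A quick way to see which assetstore mount point actually contains files (a sketch; the girder container name `dsa-girder-1` is an assumption, confirm it with `docker ps`):

```shell
# Sketch: list both candidate assetstore paths inside the girder container
# to see which one the slides actually live under. The container name is
# an assumption; confirm with `docker ps`. Guarded to no-op without docker.
CONTAINER="dsa-girder-1"

if command -v docker >/dev/null; then
  for DIR in /assetstore /opt/digital_slide_archive/assetstore; do
    echo "== $DIR =="
    docker exec "$CONTAINER" ls "$DIR" 2>/dev/null | head -n 5 || true
  done
fi
```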

I used the same MongoDB version as in the old deployment (4.4.19). I haven't tried whether one could migrate to a newer MongoDB version with mongodump. Probably not, as MongoDB 4 and MongoDB 5 should not be directly compatible; same with 4→5→6.

This is not a proper fix, but it gets the problem solved. I would be interested to know why the new DSA is not picking up simply-copied MongoDB /db files and instead just creates clean MongoDB tables.