Open ds2268 opened 1 year ago
Credentials used with the deploy script are 100% OK.
It uses the provided admin username and password to set a value on the girder remote worker plugin. By the time ansible is printing that message, the system should be up enough that you can visit it via web browser. Can you see the remote worker plugin? Can you change the broker setting manually using your credentials?
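To check the credentials and the broker setting by hand, something like the following sketch could work against the Girder REST API (the port 8080 and the setting key `worker.broker` are assumptions here, not confirmed by this thread; adjust to your deployment):

```shell
# Hedged sketch: authenticate against Girder with the admin credentials,
# then set the worker broker setting via the REST API.
# Assumed: Girder on localhost:8080, setting key "worker.broker".
TOKEN=$(curl -s -u admin:password \
  "http://localhost:8080/api/v1/user/authentication" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["authToken"]["token"])')
curl -s -X PUT "http://localhost:8080/api/v1/system/setting" \
  --header "Girder-Token: $TOKEN" \
  --data-urlencode "key=worker.broker" \
  --data-urlencode "value=amqp://guest:guest@rabbitmq/"
```

If the first request fails with a 401, the admin credentials themselves are the problem, not the plugin.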
Thank you @manthey for the fast reply!
I am able to reach Girder, but unable to log in with the old credentials. Do you have any solution? The old deployment is running fine, but we want to migrate it to the cloud and would really like to preserve all the data, users, etc. I managed to do this last year when I moved from HDD to SSD, but the same process (with probably much newer versions of the sub-components) is giving me a hard time now. The alternative is to transfer the data and annotations via the API, but then we lose all the metadata (time & date), and it's also cumbersome.
It looks like dsa_mongodb does not have all the data, despite my moving the .dsa folder and pointing the deploy script at the right paths. Note also that the old deploy runs quite an old MongoDB (4.4.19).
Old deploy (note 20GB of data):

```
mongo "mongodb://localhost:27017/girder"
```

Screenshot: https://user-images.githubusercontent.com/25041003/252764345-ffa8e3bc-9931-46bf-9361-1b7471c6251a.png

"New" deploy (newer MongoDB ships `mongosh` instead of `mongo`):

```
mongosh "mongodb://localhost:27017/girder"
```

Screenshot: https://user-images.githubusercontent.com/25041003/252764584-2b9dbdd2-1293-4565-a50c-d49650ab3b12.png
So it looks like the MongoDB data was also not picked up? Or it was overwritten with a new empty workspace... no idea. This is probably why the old credentials are not working on Girder.

/db is otherwise properly mounted at /data/db inside the new dsa_mongodb container, with the proper size of ~20GB.
So, as you noted, it could be that the mongo database credentials have changed. Although the girder database is there... no? Though 308KB seems kind of small.
-- David A Gutman, M.D. Ph.D. Associate Professor of Pathology Emory University School of Medicine
It should be 20GB, as in the old deployment. I think it just created default new tables in Mongo because it somehow couldn't access the old mongo /db files. No credentials were used for mongo, as by default; credentials, I think, are only used for Girder.
Yeah, I meant whether or not you can connect to mongo. When you start up the containers with docker-compose, you can also try explicitly choosing an older version of mongo, just in case something weird happened between the latest version and the one you were using.

Also (obviously), make a copy of your mongo directory, and run from the copy...
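Concretely, a minimal sketch of working from a copy (the `db` path is an assumption; substitute your real data directory, e.g. `./.dsa/db`):

```shell
# Work on a copy of the Mongo data directory, never the original.
# "db" is an assumed path -- substitute your real data directory.
mkdir -p db                 # stand-in for the existing data directory
cp -a db db-backup          # -a preserves ownership, permissions, timestamps
ls -d db-backup             # confirm the copy exists
```

Then point the compose volume at the copy while experimenting, so a failed startup can never corrupt the original files.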
What is likely happening, though, is in the docker-compose file: note the ./db:/data/db line... make sure that gets changed to ./.dsa:/data/db (or wherever, relative to where you are running the docker container, the database is living).
```yaml
mongodb:
    image: "mongo:latest"
    # Set DSA_USER=$(id -u):$(id -g) so that database files are owned by yourself.
    user: ${DSA_USER:-PLEASE SET DSA_USER}
    restart: unless-stopped
    # Using --nojournal means that changes can be lost between the last
    # checkpoint and an unexpected shutdown, but can substantially reduce
    # writes.
    # Limiting maxConns reduces the amount of shared memory demanded by
    # mongo.  Remove this limit or increase the host vm.max_map_count value.
    command: --nojournal --maxConns 1000
    volumes:
      # Location to store database files
      - ./db:/data/db
```
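Following the suggestion above to try an older mongo, the image line can be pinned to the version the old deployment used (4.4.19, per this thread) instead of `mongo:latest`:

```yaml
mongodb:
    image: "mongo:4.4.19"
```

Pinning rules out any on-disk format differences between the old data files and a newer server version.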
I have moved to the docker-compose deploy instead of my old deploy-docker.py. I moved the assetstore and db files to /dsa/db etc. (so that the default paths in docker-compose were OK), made sure that DSA_USER was set to the same UID, and also used the same version of MongoDB. I thus have no idea what else it can be :(

I didn't add a mongo user and password, but if I understand correctly, that's not even needed, given that there already is a non-default admin user in the db.
no idea.
The DSA gets deployed... just fully fresh. It looks like it ignores the old MongoDB assets and creates new ones, with no errors.

20GB of old Mongo data is mounted at /data/db, but "show dbs" inside mongo shows empty 0GB databases... new ones were created.

Was there any change such that a new deploy creates new tables regardless of what is mounted at /db?
The mystery deepens... So once the containers are running (docker ps), can you exec into the docker container? i.e. `docker exec -it dsa-mongodb-1 bash`. I think that's what it is called; you can confirm the exact name given to it by docker compose by running `docker ps`.

Double-check your data is actually in /data/db, i.e. there are 20 gigs of stuff there, not 300 KB. It's also possible there's some weird permission issue, so your DSA data directory is not readable by the DSA user, or some other painful permission problem... At the very least, confirm that within the context of the docker container it's actually pointing to /data/db.
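For example, run these inside the mongodb container (container name per this thread; the expected sizes are from the numbers above):

```shell
# Inside the container after: docker exec -it dsa-mongodb-1 bash
id                       # which uid/gid the container process actually runs as
du -sh /data/db          # should show ~20G, not a few hundred KB
ls -ln /data/db | head   # numeric owner/group of the data files; compare with `id`
```

If the numeric owner of the files does not match the uid from `id`, mongod may silently start with a fresh data set elsewhere or fail to read the old one.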
Yes, I am inside dsa_mongodb
```
I have no name!@c1b106c14368:/$ du -sh /data/db/
20G     /data/db/
```

This shows that ./db is mounted inside dsa_mongodb... but...
Then I connect to MongoDB inside the same container and execute "show dbs":

```
mongo "mongodb://localhost:27017/girder"
> show dbs
admin   0.000GB
config  0.000GB
girder  0.001GB
local   0.000GB
>
```
So it looks like new (empty) DB entries were created, as the DSA is fully functioning, just fresh :) I did the exact same thing a couple of months ago and it worked with such a copy of the MongoDB files. I don't want to touch the old deployment now, because it probably only still works thanks to the old images of either DSA or some dependency.

I have stopped the existing deployment, copied the DB files to the new VM... and then restarted the old deployment, and it still works (with the old images that were already there). But the copied db files on the new VM with the latest images don't (and for MongoDB I now have the exact same version, 4.4.19)... mystery :)
I have managed to fix it, but it involved some workarounds.

I used `mongodump` to dump the existing databases from the working deployment that I wanted to migrate. Note that this is not efficient, as `mongodump` produces a much larger dump (e.g., 20GB resulted in 60GB of BSON files).

I then connected to the `dsa_mongodb` container on the new deployment, which did not "recognize" the previously copied raw MongoDB /db files, and manually restored the content of the database from the dump using `mongorestore`.
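The dump/restore route described above might look like the following sketch (container name, database name, and paths are assumptions based on this thread; adjust to your setup):

```shell
# On the old, working deployment: dump the girder database to a directory
# inside the mounted volume so it is visible on the host.
docker exec dsa_mongodb mongodump --db girder --out /data/db/dump

# Copy the dump directory to the new VM, then restore it into the new
# deployment's mongo container.
docker exec dsa_mongodb mongorestore --db girder /data/db/dump/girder
```

Unlike copying raw /db files, `mongodump`/`mongorestore` goes through the server, so file ownership and storage-engine details of the old files no longer matter.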
I could now log in to the new DSA using the old credentials, and all the data (labels, users) was preserved. The slides (copied assetstore) did not work initially, but the problem was that it expected the assetstore to be mounted at /opt/digital_slide_archive/assetstore instead of /assetstore, the default in the docker-compose configuration. I just changed that, and it seems I have successfully migrated the old DB to the new deployment (my old deployment was deployed using deploy_docker.py, and the default mount locations were probably different there).
I used the same MongoDB version as in the old deployment (4.4.19). I haven't tried whether one could use `mongodump` to migrate to a newer MongoDB version. Probably not, as MongoDB 4 and MongoDB 5 should not be directly compatible; the same applies when going 4 to 5 to 6.
This is not the fix, but it gets the problem solved. I would still be interested to know why the new DSA does not pick up simply copied MongoDB /db files and instead just creates clean MongoDB tables.
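On the version-compatibility question, one thing worth checking is what the running database itself reports as its feature compatibility version; major-version upgrades are expected to go stepwise (4.4 to 5.0 to 6.0), raising this value at each step:

```shell
# Query the feature compatibility version of a running instance
# (with MongoDB 4.4, use "mongo" instead of "mongosh").
mongosh "mongodb://localhost:27017/admin" --eval \
  'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'
```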
I have migrated an old (1+ year) DSA deployment to a new one by copying the .dsa directory to the new VM and using the deploy.sh script:
```shell
./deploy.sh -j 4 --assetstore PATH/.dsa/assetstore/ --cache 8192 -d PATH/.dsa/db/ --logs PATH/.dsa/logs --port 9090 --user "USER" --password "PASS" --cli start
```
I get the following error:
Any idea?
dsa_mongodb looks fine: