medic / cht-core

The CHT Core Framework makes it faster to build responsive, offline-first digital health apps that equip health workers to provide better care in their communities. It is a central resource of the Community Health Toolkit.
https://communityhealthtoolkit.org

3.x -> 4.x data migration container #7891

Closed: dianabarsan closed this issue 1 year ago

dianabarsan commented 1 year ago

Create a container that contains all the necessary scripts, exposed as commands, that edit Couchdb node and database metadata, to facilitate data migration from a 3.x instance to a 4.x instance.
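
For a rough picture of how such a container could be driven, here is a minimal sketch, assuming the commands are exposed through docker-compose as in the comments further down this thread (the `couch-migration` service name, the `get-env`/`move-node`/`verify` commands, and the placeholder values are taken from or modelled on those comments):

```
# Point the migration tool at the CouchDb of the instance being migrated.
# <couchdb-host> and the credentials are placeholders.
export COUCH_URL=http://medic:password@<couchdb-host>:5984
export CHT_NETWORK=cht-net

docker-compose run couch-migration get-env    # print COUCHDB_* values to reuse on the 4.x side
docker-compose run couch-migration move-node  # single node: rewrite node metadata for the new node name
docker-compose run couch-migration verify     # check that all databases migrated cleanly
```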

The scripts should cover:

dianabarsan commented 1 year ago

I've added the code for this temporarily in a repo: https://github.com/medic/couchdb-migration. I'm keeping the bulk of it in a pull request to ease code review.

dianabarsan commented 1 year ago

This is ready for AT. The code is available in the repo linked above.

We have put together documentation for users to follow to achieve this migration. This can be found in this PR: https://github.com/medic/cht-docs/pull/866 Since we want both the documentation and the software to be correct and easy to use, please follow the steps in the documentation to AT this migration container.

There are a couple of test cases that should be covered:

Additionally, it would be helpful if we could assess the quality of the instructions for users who might be hosting on AWS without using medic-os (if there are such cases) and who might have CouchDb data saved in some type of AWS storage volume.

Ideally, we will improve the documentation to such quality that migrating is easy. Feedback is very welcome!

Thanks!

lorerod commented 1 year ago

Hi @dianabarsan, this is still a work in progress, but I would like to write my findings when testing this. Thank you so much for this work. It is going to be super valuable.

  1. For the scenario of "migrating from medic-os to single node 4.x", I was successful only with an online user using the Chrome web app, changing the port to 443. With the two offline users that were connected to the previous 3.x instance on phones, I couldn't sync. This is because of the URL used to connect to the instance: the ports differ between the 3.x medic-os instance (port 8443) and the 4.x instance (port 443). I tried changing the port to 8443 in the 4.x cht-core.yml but could not make this work. For this it would be helpful to be prepared in advance (see the sketch after this list).

  2. I couldn't complete the scenario of "migrating from medic-os to multi-node 4.x" successfully. I will keep working on this.
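
Regarding the port mismatch in point 1, a minimal sketch of what "being prepared in advance" could look like, assuming the NGINX_HTTPS_PORT variable mentioned in a later comment in this thread controls the HTTPS port the 4.x nginx listens on (worth confirming against the 4.x docker-compose files):

```
# Keep serving on the 3.x port (8443) so phones configured against
# https://<host>:8443 can still reach the migrated instance.
# NGINX_HTTPS_PORT is an assumption taken from a later comment; verify it
# against cht-core.yml before relying on it.
export NGINX_HTTPS_PORT=8443
docker-compose -f cht-core.yml up -d
```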

Other documentations suggestions we can discuss:

I know @ngaruko is also working on this. He may have more suggestions. cc: @andrablaj

dianabarsan commented 1 year ago

Thanks a lot for the feedback @lorerod .

It is not only a "data migration" set of instructions.

It's supposed to only cover data migration, though. The data migration does indeed require that no further changes are made in the data. What title would you suggest?

It would be interesting to have some rollback instructions in case the happy path doesn't work. To go back to your 3.x instance. It would give me more confidence. Do you think this could work?

I think the backup of data that we instruct to save should be enough of a "rollback". Do you think that would suffice?
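
As a concrete illustration of that backup, here is a minimal sketch, assuming a dockerized medic-os install; the container name is a placeholder to be replaced with whatever `docker ps` shows:

```
# Archive the CouchDb data directory from the medic-os container and copy it out.
docker exec <medic-os-container> tar czf /tmp/couchdb-data-backup.tar.gz \
  -C /srv/storage/medic-core/couchdb data
docker cp <medic-os-container>:/tmp/couchdb-data-backup.tar.gz ./couchdb-data-backup.tar.gz
```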

Env variables COUCH_URL and CHT_NETWORK must be set in two places

Are you referring to the environment variables that you need to pass to CouchDb and to the migration tool?

ngaruko commented 1 year ago

@dianabarsan

  1. On the env variables, I think it would be more helpful if we add all required variables and possibly where/how to find them. For instance, if CHT_NETWORK is not set, the user gets a generic error `network cht-net declared as external, but could not be found`. So for this one, besides mentioning that it is required (it seems it is), we could also suggest ways to find it: `docker network ls` or otherwise (see the sketch after this list).
  2. Speaking of errors, we could also improve some of the error messaging. I am seeing this error, for instance, and it is hard to figure out what went wrong:
    Error while getting membership
    Error when getting config FetchError: request to http://medic:password@localhost:5984/_membership failed, reason:   connect ECONNREFUSED 127.0.0.1:5984
      at ClientRequest.<anonymous> (/app/node_modules/node-fetch/lib/index.js:1461:11)
      at ClientRequest.emit (node:events:513:28)
      at Socket.socketErrorListener (node:_http_client:494:9)
      at Socket.emit (node:events:513:28)
      at emitErrorNT (node:internal/streams/destroy:157:8)
      at emitErrorCloseNT (node:internal/streams/destroy:122:3)
      at processTicksAndRejections (node:internal/process/task_queues:83:21) {
    type: 'system',
    errno: 'ECONNREFUSED',
    code: 'ECONNREFUSED'
    }
    An unexpected error occurred Error: Error when getting config
      at Object.getConfig (/app/src/utils.js:181:11)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)
      at async getEnv (/app/src/get-env.js:5:18)
      at async /app/bin/get-env.js:7:5
  3. There might be other hard-coded/default variables (like that port 5984) that also need to be documented a little more.
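
For points 1 and 2, a minimal sketch of the kind of guidance that could go in the docs, assuming a dockerized 3.x install (container and network names are placeholders):

```
# Find the docker network the 3.x CouchDb/haproxy containers are attached to,
# and the container name to use as the CouchDb host.
docker network ls
docker ps --format '{{.Names}}'

# COUCH_URL must use a hostname that resolves inside that network; localhost
# inside the migration container points at the container itself, which is one
# way to end up with "connect ECONNREFUSED 127.0.0.1:5984".
export CHT_NETWORK=<network-name-from-docker-network-ls>
export COUCH_URL=http://medic:password@<couchdb-or-haproxy-container-name>:5984
```
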
dianabarsan commented 1 year ago

Thanks for the feedback @ngaruko

For both 1 and 2:

For 1, your suggestion of checking `docker network ls` is good; how do you suggest we instruct further if there are multiple networks? For 2, do you think changing the example of the URL that needs to be passed would help?
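
If several networks show up, one hedged way to narrow it down is to check which network a known 3.x container is attached to (the container name is a placeholder):

```
docker network ls --filter name=medic
docker inspect --format '{{json .NetworkSettings.Networks}}' <haproxy-container-name> | jq 'keys'
```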

lorerod commented 1 year ago

@dianabarsan

it's supposed to only cover data migration, though. The data migration does indeed require that no further changes are made in the data. What title would you suggest?

I would suggest maybe something like this:
- Main title: Migration from CHT 3.x to CHT 4.x
- Subtitle: Guide to migrate existing data from CHT 3.x to CHT 4.x
- Some observations before point 1: "This guide will present the required steps while using a migration helper tool, called couchdb-migration", and add "by the end of this guide you will end up with your CHT-Core 3.x instance down and CHT-Core 4.x ready to be used."

I think the backup of data that we instruct to save should be enough of a "rollback". Do you think that would suffice?

I think we can add some basic instructions on how to get your CHT 3.x up again with the backup data, or link to related documentation if it exists. Maybe in the same "Some observations before point 1" we can add: "If you encounter any problems executing the instructions of this guide, you can always get your CHT 3.x instance up again with the backup data. See link for further instructions." Or put the instructions at the end of the same guide. Please let me know if you think this is too much.
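
As a sketch of what such rollback instructions might contain, assuming the backup tarball from earlier and that the stopped medic-os container still exists (the container name is a placeholder; a real guide should also cover stopping CouchDb before restoring, similar to the /boot/svc-stop step used for medic-api):

```
# Bring the 3.x container back and restore the backed-up CouchDb data directory.
docker start <medic-os-container>
docker cp ./couchdb-data-backup.tar.gz <medic-os-container>:/tmp/
docker exec <medic-os-container> tar xzf /tmp/couchdb-data-backup.tar.gz \
  -C /srv/storage/medic-core/couchdb
docker restart <medic-os-container>   # restart so all 3.x services pick up the restored data
```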

Are you referring to the environment variables that you need to pass to CouchDb and to the migration tool?

I'm referring to the environment variables that I need to pass to the migration tool.

dianabarsan commented 1 year ago

Thanks a lot for the feedback @lorerod . I'll include it in the docs PR.

lorerod commented 1 year ago

@dianabarsan I saw that we have a new version for couchdb-migration. Can we continue testing this? cc: @ngaruko

dianabarsan commented 1 year ago

@lorerod correct, there is a new version that fixes some interactions between the migration software and self-hosted medic-os. If you continue testing, please use the new version. @mrjones-plip volunteered to help with testing as well.
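
For anyone picking this up, a minimal sketch of updating to the new version, assuming a local checkout of the couchdb-migration repo that is run through docker-compose:

```
cd couchdb-migration
git pull
docker-compose pull couch-migration   # or rebuild the image, depending on how it is obtained
```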

lorerod commented 1 year ago

Environment:
- MacOS 13.1 (22C65); Docker Desktop 4.15.0 (93002); Docker Engine 20.10.21
- CHT 3.17: local, using the docker helper script from the master branch; config: standard; data: uploaded from the scalability test
- CHT 4.1.0: local, using the docker compose files cht-core.yml and cht-couchdb.yml
- couchdb-migration branch: main
- cht-docs branch: 828-4.x-upgrade
- Phones:

Migrating from medic-os to single node 4.1.0 ✅

- I logged in to the 3.17 instance with users ac1 and ac2 on two different phones.
- Installed the CHT data migration tool. Provided CouchDb data:
  - `export CHT_NETWORK=migration_project_medic-net`
  - `export COUCH_URL=http://medic:password@migration_project_haproxy_1:8443/`
- I got a count of the total number of documents in CHT:
  - Fauxton database information: ![3 17 database](https://user-images.githubusercontent.com/21312057/223754456-4dc2eae4-ab47-4422-bae1-050ff942956a.png)
  - Database info JSON:

```
{ "db_name":"medic", "purge_seq":"0-g1AAAAFTeJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMeC5BkeACk_gNBViIDQbUNELXz8atNSgCSSfUEzUxSAKmzJ9LuBRC79xMw0wFkZjyRZh6AmHkfv9pEhiR5iIFZAB3kXo4", "update_seq":"12908-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMiQ5L8____s5IYGNi24lGXpAAkk-xhSlPxKXUAKY2HKXXDpzQBpLQeplQZj9I8FiDJ0ACkgKrng5XbElS-AKJ8P1i5DUHlByDK74OVSxNU_gCiHOL20CwAQq5hMg", "sizes":{ "file":139605696, "external":124984426, "active":120177061 }, "other":{ "data_size":124984426 }, "doc_del_count":15, "doc_count":10545, "disk_size":139605696, "disk_format_version":7, "data_size":120177061, "compact_running":false, "cluster":{ "q":8, "n":1, "w":1, "r":1 }, "instance_start_time":"0" }
```

- I prepared the CHT-Core 3.x installation for upgrading:
  - Backed up /srv/storage/medic-core/couchdb/data in the medic-os container (⚠️ This instruction is in two places in the doc: in 2. and 4. Please make clear in the doc when it is the right moment to do the backup.)
  - Stopped the API by getting a shell on the Medic OS container and calling /boot/svc-stop medic-api:

```
Debug: Service 'medic-api/medic-api' exited with status 143
Info: Service 'medic-api/medic-api' was stopped successfully
Success: Finished stopping services in package 'medic-api'
```

  - Initiated view indexing by running `docker-compose run couch-migration pre-index-views 4.1.0`
  - Saved the existing CouchDb configuration by running `docker-compose run couch-migration get-env`, which returned:

```
COUCHDB_USER=medic
COUCHDB_PASSWORD=password
COUCHDB_SECRET=70c9debff9c41e5a67d5fb189eef6575
COUCHDB_UUID=339a0dfa73ee09c85e08661b7923c0e9
```

  - At this point, I stopped my 3.17 medic-os container (⚠️ This may be only relevant for a local setup. If not, then please clarify this in the documentation.)
- Launched the 4.x CouchDb installation - single node:
  - I downloaded the 4.x single-node CouchDb docker-compose file into a directory: `curl -s -o ./docker-compose.yml https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:4.1.0/docker-compose/cht-couchdb.yml`
  - Copied the content of the 3.17 backup data into the new directory
  - Set the env variables:

```
export COUCHDB_USER=medic
export COUCHDB_PASSWORD=password
export COUCHDB_SECRET=70c9debff9c41e5a67d5fb189eef6575
export COUCHDB_UUID=339a0dfa73ee09c85e08661b7923c0e9
export COUCHDB_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-single/data
```

  - Updated the couchdb-migration environment variables with the 4.x information:

```
export COUCH_URL=http://medic:password@couchdb-single_couchdb_1:5984
export CHT_NETWORK=cht-net
```

  - Started 4.1 CouchDb with `docker-compose up -d` inside the new project directory
  - Checked CouchDb is up with `docker-compose run couch-migration check-couchdb-up` inside the couchdb-migration directory:

```
Creating couchdb-migration_couch-migration_run ... done
Waiting for CouchDb to be ready...
CouchDb is Ready
```

  - Executed `docker-compose run couch-migration move-node`:

```
Creating couchdb-migration_couch-migration_run ... done
Node moved successfully
```

  - Executed `docker-compose run couch-migration verify`:

```
Creating couchdb-migration_couch-migration_run ... done
Verifying _global_changes
Database _global_changes has passed migration checks.
Verifying _replicator
Database _replicator has passed migration checks.
Verifying _users
Database _users has passed migration checks.
Verifying medic
Database medic has passed migration checks.
Verifying medic-audit
Database medic-audit has passed migration checks.
Verifying medic-logs
Views of database medic-logs are not indexed. This can be caused by a migration failure or by the the views functions not indexing any documents.
Verifying medic-sentinel
Views of database medic-sentinel are not indexed. This can be caused by a migration failure or by the the views functions not indexing any documents.
Verifying medic-user-ac1-meta
Views of database medic-user-ac1-meta are not indexed. This can be caused by a migration failure or by the the views functions not indexing any documents.
Verifying medic-user-ac2-meta
Views of database medic-user-ac2-meta are not indexed. This can be caused by a migration failure or by the the views functions not indexing any documents.
Verifying medic-user-medic-meta
Views of database medic-user-medic-meta are not indexed. This can be caused by a migration failure or by the the views functions not indexing any documents.
Verifying medic-users-meta
Views of database medic-users-meta are not indexed. This can be caused by a migration failure or by the the views functions not indexing any documents.
Migration verification passed.
```

  - Started CHT-Core 4.1 using the same environment variables already set, also setting NGINX_HTTPS_PORT to match the port of the local 3.17 instance. (⚠️ This may be only relevant for a local setup. If not, then it should be an observation in the documentation.)
- Once the migration is successful:
  - ✅ offline users should not need to resync data
  - ✅ users should not get logged out ![Screenshot_20230308-112043](https://user-images.githubusercontent.com/21312057/223739778-2490816d-b6a9-4489-8136-b9c4d29d0039.jpg)
  - ✅ offline users should be able to resync ![Screenshot_20230308-094359](https://user-images.githubusercontent.com/21312057/223739835-19bc1aae-dbad-4fa2-8ed5-1527840b06bd.jpg)
  - ⚠️ All data should be available in the new instance. For this I compared the database information JSON and the Fauxton database info. I found some differences in the JSON, but `doc_count` is the same in both instances.
  - Fauxton database information: ![Captura de pantalla 2023-03-08 a la(s) 09 42 02](https://user-images.githubusercontent.com/21312057/223738013-3afb0d73-fcfe-4d5f-bf15-45dcc3bd67db.png)
  - Database info JSON:

```
{ "db_name":"medic", "purge_seq":"0-g1AAAAFTeJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMeC5BkWACk_v__vz8rkQGP2iQHIJkUD1SIXx3EzAaImfMJmJkAMrOeoJlJCiB19kTafQBi931i1D6AqCVgbiJDkjxEURYAyQVejg", "update_seq":"12968-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMiQ5L8____s5IYGNj24VGXpAAkk-xhSjPwKXUAKY2HKfXApzQBpLQeptQAj9I8FiDJ0ACkgKrng5W7E1S-AKJ8P1i5C0HlByDK74OVqxJU_gCiHOL2qCwAZyNhbg", "sizes":{ "file":10999512, "external":15122310, "active":10475588 }, "other":{ "data_size":15122310 }, "doc_del_count":15, "doc_count":10545, "disk_size":10999512, "disk_format_version":7, "data_size":10475588, "compact_running":false, "cluster":{ "q":8, "n":1, "w":1, "r":1 }, "instance_start_time":"0" }
```

@dianabarsan I was able to migrate successfully, but this is still a work in progress. I will continue with the migration of 3.17 medicos to 4.1 clustered. This scenario is ok. I left some comments highlighted with an ⚠️ icon. Please let me know what you think.
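
As a quick way to back the doc_count comparison, a minimal sketch using curl and jq (the hosts are placeholders; use whatever COUCH_URL points at in each setup):

```
# doc_count of the medic database before and after the migration should match.
curl -s http://medic:password@<3x-couchdb-host>:5984/medic | jq '.doc_count'
curl -s http://medic:password@<4x-couchdb-host>:5984/medic | jq '.doc_count'
```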

lorerod commented 1 year ago

Environment: same as in the previous comment, except that the couchdb compose file is cht-couchdb-clustered.yml.

Migrating from medic-os to clustered node 4.1.0 :x:

- I logged in to the 3.17 instance with users ac1 and ac2 on two different phones.
- Installed the CHT data migration tool. Provided CouchDb data:
  - `export CHT_NETWORK=migration_project_medic-net`
  - `export COUCH_URL=http://medic:password@migration_project_haproxy_1:8443/`
- I prepared the CHT-Core 3.x installation for upgrading:
  - Stopped the API by getting a shell on the Medic OS container and calling /boot/svc-stop medic-api:

```
Debug: Service 'medic-api/medic-api' exited with status 143
Info: Service 'medic-api/medic-api' was stopped successfully
Success: Finished stopping services in package 'medic-api'
```

  - Initiated view indexing by running `docker-compose run couch-migration pre-index-views 4.1.0`
  - Saved the existing CouchDb configuration by running `docker-compose run couch-migration get-env`, which returned:

```
COUCHDB_USER=medic
COUCHDB_PASSWORD=password
COUCHDB_SECRET=56fd72c4c6a793516ee789624d667af1
COUCHDB_UUID=bfeec72652b854f958ee52258d6a0e0b
```

  - Backed up /srv/storage/medic-core/couchdb/data in the medic-os container (⚠️ This instruction is in two places in the doc: in 2. and 4. Please make clear in the doc when it is the right moment to do the backup.)
  - At this point, I stopped my 3.17 medic-os container (⚠️ This may be only relevant for a local setup. If not, then please clarify this in the documentation.)
- Launched the 4.x CouchDb installation - multi node:
  - I downloaded the 4.x clustered CouchDb docker-compose file into a directory: `curl -s -o ./docker-compose.yml https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:4.1.0/docker-compose/cht-couchdb-clustered.yml`
  - Created the data folders:

```
/couchdb/data/main
/couchdb/data/secondary1
/couchdb/data/secondary2
```

  - Created a shards and a .shards directory in every secondary node folder
  - Copied the 3.17 backup data into the /couchdb/data/main directory
  - Set the env variables:

```
export COUCHDB_USER=medic
export COUCHDB_PASSWORD=password
export COUCHDB_SECRET=56fd72c4c6a793516ee789624d667af1
export COUCHDB_UUID=bfeec72652b854f958ee52258d6a0e0b
export DB1_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-cluster/couchdb-data/main
export DB2_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-cluster/couchdb-data/secondary1
export DB3_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-cluster/couchdb-data/secondary2
```

  - Updated the couchdb-migration environment variables with the 4.x information ⚠️:

```
export COUCH_URL=http://medic:password@couchdb-cluster-couchdb.1-1:5984
export CHT_NETWORK=cht-net
```

  - Started 4.1 CouchDb with `docker-compose up -d` inside the new project directory:

```
cd ~/couchdb-cluster/
docker-compose up -d
Creating couchdb-cluster_couchdb.2_1 ... done
Creating couchdb-cluster_couchdb.1_1 ... done
Creating couchdb-cluster_couchdb.3_1 ... done
```

  - Checked CouchDb is up with `docker-compose run couch-migration check-couchdb-up` inside the couchdb-migration directory:

```
Creating couchdb-migration_couch-migration_run ... done
Waiting for CouchDb to be ready...
CouchDb is Ready
```

  - ⚠️ I tried using `` (in my case `couchdb.1`) in COUCH_URL as in the doc, but when checking that CouchDb is up I got:

```
Waiting for CouchDb to be ready...
ports:
- "${COUCH_PORT}:5984"
- "${COUCH_CLUSTER_PORT}:5986"
An unexpected error occurred Error: CouchDb is not up after 100 seconds.
    at checkCouchUp (/app/src/check-couch-up.js:36:11)
    at async /app/bin/check-couchdb-up.js:10:5
```

  - Generated the shard distribution matrix and got instructions for final shard locations:

```
shard_matrix=$(docker-compose run couch-migration generate-shard-distribution-matrix)
docker-compose run couch-migration shard-move-instructions $shard_matrix
Move /shards/00000000-1fffffff to /shards/00000000-1fffffff
Move /.shards/00000000-1fffffff to /.shards/00000000-1fffffff
Move /shards/20000000-3fffffff to /shards/20000000-3fffffff
Move /.shards/20000000-3fffffff to /.shards/20000000-3fffffff
Move /shards/40000000-5fffffff to /shards/40000000-5fffffff
Move /.shards/40000000-5fffffff to /.shards/40000000-5fffffff
Move /shards/60000000-7fffffff to /shards/60000000-7fffffff
Move /.shards/60000000-7fffffff to /.shards/60000000-7fffffff
Move /shards/80000000-9fffffff to /shards/80000000-9fffffff
Move /.shards/80000000-9fffffff to /.shards/80000000-9fffffff
Move /shards/a0000000-bfffffff to /shards/a0000000-bfffffff
Move /.shards/a0000000-bfffffff to /.shards/a0000000-bfffffff
Move /shards/c0000000-dfffffff to /shards/c0000000-dfffffff
Move /.shards/c0000000-dfffffff to /.shards/c0000000-dfffffff
Move /shards/e0000000-ffffffff to /shards/e0000000-ffffffff
Move /.shards/e0000000-ffffffff to /.shards/e0000000-ffffffff
```

  - Manually moved the shard files to the correct location
  - Changed metadata to match the new shard distribution:

```
docker-compose run couch-migration move-shards $shard_matrix
Shards moved successfully
```

  - `docker-compose run couch-migration verify`:

```
...
Migration verification passed.
```

  - Started CHT-Core 4.1 using `curl -s -o ./cht-core.yml https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:4.1.0/docker-compose/cht-core.yml` and `COUCHDB_SERVERS=couchdb.1,couchdb.2,couchdb.3 docker-compose up`, and I got these errors in the log:

```
api_1 | Error: Cluster not ready
api_1 |     at checkCluster (/shared-libs/server-checks/src/checks.js:94:11)
api_1 |     at processTicksAndRejections (node:internal/process/task_queues:96:5)
api_1 |     at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7)
api_1 |     at async /api/server.js:23:5
haproxy_1 | <150>Mar 9 18:12:11 haproxy[25]: 172.20.0.8,,503,0,0,0,GET,/,-,medic,'-',222,-1,-,'-'
sentinel_1 | StatusCodeError: 503 - "503 Service Unavailable\nNo server is available to handle this request.\n\n"
sentinel_1 |     at new StatusCodeError (/sentinel/node_modules/request-promise-core/lib/errors.js:32:15)
sentinel_1 |     at Request.plumbing.callback (/sentinel/node_modules/request-promise-core/lib/plumbing.js:104:33)
sentinel_1 |     at Request.RP$callback [as _callback] (/sentinel/node_modules/request-promise-core/lib/plumbing.js:46:31)
sentinel_1 |     at Request.self.callback (/sentinel/node_modules/request/request.js:185:22)
sentinel_1 |     at Request.emit (node:events:513:28)
sentinel_1 |     at Request.<anonymous> (/sentinel/node_modules/request/request.js:1154:10)
sentinel_1 |     at Request.emit (node:events:513:28)
sentinel_1 |     at IncomingMessage.<anonymous> (/sentinel/node_modules/request/request.js:1076:12)
sentinel_1 |     at Object.onceWrapper (node:events:627:28)
sentinel_1 |     at IncomingMessage.emit (node:events:525:35) {
sentinel_1 |   statusCode: 503,
sentinel_1 |   error: '503 Service Unavailable\n' +
sentinel_1 |     'No server is available to handle this request.\n' +
sentinel_1 |     '\n'
sentinel_1 | }
haproxy_1 | <150>Mar 9 18:12:12 haproxy[25]: 172.20.0.7,,503,0,0,0,GET,/,-,medic,'-',222,-1,-,'-'
api_1 | StatusCodeError: 503 - "503 Service Unavailable\nNo server is available to handle this request.\n\n"
```

  - The instance is not working.

@dianabarsan I wasn't able to migrate successfully. Am I missing something? @ngaruko Did you try this also? Can you share your experience?

dianabarsan commented 1 year ago

Hi @lorerod

Thanks a lot for the extensive detail that you've provided! This is of great value!

I tried using (in my case couchdb.1) in COUCH_URL as in the doc, but when checking that couchdb is up I got:

It seems you have unblocked yourself; what URL did you end up using?

Started CHT-Core 4.1 using curl -s -o ./docker-compose.yml https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:4.1.0/docker-compose/cht-core.yml and COUCHDB_SERVERS=couchdb.1,couchdb.2,couchdb.3 docker-compose up, and I got these errors in the log:

Can you please try:
  • stopping the previous CouchDb build that you used for the migration
  • starting CHT 4.1 using both docker-compose files

My guess is that there is some docker network mismatch and haproxy can't reach the pre-existing CouchDb. Please make sure that you're using the same environment variables when you start 4.1 CouchDb along with the other services.
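
A minimal sketch of how that mismatch could be checked, assuming the cht-net network name used earlier in this thread (the container name is a placeholder):

```
# Which containers ended up on the network the 4.x services expect?
docker network inspect cht-net --format '{{range .Containers}}{{.Name}} {{end}}'

# And were the running CouchDb containers started with the same COUCHDB_* values
# (secret, uuid, credentials) that the 4.x services are being given?
docker exec <couchdb-container-name> env | grep COUCHDB_
```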

lorerod commented 1 year ago

It seems you have unblocked yourself; what URL did you end up using?

@dianabarsan I ended up using COUCH_URL=http://medic:password@couchdb-cluster-couchdb.1-1:5984

lorerod commented 1 year ago

Can you please try:

  • stopping the previous CouchDb build that you used for the migration
  • starting CHT 4.1 using both docker-compose files

@dianabarsan I stopped the previous CouchDb build and started CHT 4.1 using: `COUCHDB_SERVERS=couchdb.1,couchdb.2,couchdb.3 docker-compose -f cht-couchdb-clustered.yml -f cht-core.yml up -d` The result was not successful. I need two pairs of fresh eyes for this; running out of ideas :) cc: @andrablaj

api log ``` Info: Starting CHT API 2023-03-22 12:25:19 INFO: Running server checks… Node Environment Options: 'undefined' Node Version: 16.17.1 in development mode CouchDB Version: 2.3.1 CouchDB Version: 2.3.1 CouchDB Version: 2.3.1 CouchDB Version: 2.3.1 CouchDB Version: 2.3.1 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 CouchDB Version: 2.3.1 CouchDB Version: 2.3.1 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 Error: Cluster not ready at checkCluster (/shared-libs/server-checks/src/checks.js:94:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async couchDbCheck (/shared-libs/server-checks/src/checks.js:135:7) at async /api/server.js:23:5 StatusCodeError: 503 - "

503 Service Unavailable

\nNo server is available to handle this request.\n\n" at new StatusCodeError (/api/node_modules/request-promise-core/lib/errors.js:32:15) at Request.plumbing.callback (/api/node_modules/request-promise-core/lib/plumbing.js:104:33) at Request.RP$callback [as _callback] (/api/node_modules/request-promise-core/lib/plumbing.js:46:31) at Request.self.callback (/api/node_modules/request/request.js:185:22) at Request.emit (node:events:513:28) at Request. (/api/node_modules/request/request.js:1154:10) at Request.emit (node:events:513:28) at IncomingMessage. (/api/node_modules/request/request.js:1076:12) at Object.onceWrapper (node:events:627:28) at IncomingMessage.emit (node:events:525:35) { statusCode: 503, error: '

503 Service Unavailable

\n' + 'No server is available to handle this request.\n' + '\n' } ```
haproxy log ``` Starting enhanced syslogd: rsyslogd. # Setting `log` here with the address of 127.0.0.1 will have the effect # of haproxy sending the udp log messages to its own rsyslog instance # (which sits at `127.0.0.1`) at the `local0` facility including all # logs that have a priority greater or equal to the specified log level # log 127.0.0.1 local0 warning global maxconn 150000 spread-checks 5 lua-load /usr/local/etc/haproxy/parse_basic.lua lua-load /usr/local/etc/haproxy/parse_cookie.lua lua-load /usr/local/etc/haproxy/replace_password.lua log stdout len 65535 local2 debug tune.bufsize 1638400 tune.http.maxhdr 1010 # https://www.haproxy.com/documentation/hapee/latest/onepage/#3.2-tune.bufsize # At least the global maxconn # parameter should be decreased by the same factor as this one is increased. If an # HTTP request is larger than (tune.bufsize - tune.maxrewrite), HAProxy will # return HTTP 400 (Bad Request) error. Similarly if an HTTP response is larger # than this size, HAProxy will return HTTP 502 (Bad Gateway). # https://www.haproxy.com/documentation/hapee/latest/onepage/#3.2-tune.http.maxhdr # Similarly, too large responses # are blocked with "502 Bad Gateway". defaults mode http option http-ignore-probes option httplog option forwardfor option redispatch option http-server-close timeout client 15000000 timeout server 360000000 timeout connect 1500000 timeout http-keep-alive 5m stats enable stats refresh 30s stats auth medic:password stats uri /haproxy?stats frontend http-in bind haproxy:5984 acl has_user req.hdr(x-medic-user) -m found acl has_cookie req.hdr(cookie) -m found acl has_basic_auth req.hdr(authorization) -m found declare capture request len 400000 http-request set-header x-medic-user %[lua.parseBasic] if has_basic_auth http-request set-header x-medic-user %[lua.parseCookie] if !has_basic_auth !has_user has_cookie http-request capture req.body id 0 # capture.req.hdr(0) http-request capture req.hdr(x-medic-service) len 200 # capture.req.hdr(1) http-request capture req.hdr(x-medic-user) len 200 # capture.req.hdr(2) http-request capture req.hdr(user-agent) len 600 # capture.req.hdr(3) capture response header Content-Length len 10 # capture.res.hdr(0) http-response set-header Connection Keep-Alive http-response set-header Keep-Alive timeout=18000 log global log-format "%ci,%s,%ST,%Ta,%Ti,%TR,%[capture.req.method],%[capture.req.uri],%[capture.req.hdr(1)],%[capture.req.hdr(2)],'%[capture.req.hdr(0),lua.replacePassword]',%B,%Tr,%[capture.res.hdr(0)],'%[capture.req.hdr(3)]'" default_backend couchdb-servers backend couchdb-servers balance leastconn retry-on all-retryable-errors log global retries 5 # servers are added at runtime, in entrypoint.sh, based on couchdb.1,couchdb.2,couchdb.3 server couchdb.1 couchdb.1:5984 check agent-check agent-inter 5s agent-addr healthcheck agent-port 5555 server couchdb.2 couchdb.2:5984 check agent-check agent-inter 5s agent-addr healthcheck agent-port 5555 server couchdb.3 couchdb.3:5984 check agent-check agent-inter 5s agent-addr healthcheck agent-port 5555 <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.8,couchdb.1,200,3,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.8,couchdb.2,401,2,0,0,GET,/,-,-,'-',353,0,61,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.8,couchdb.3,200,3,0,0,GET,/_membership,-,medic,'-',354,2,174,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.8,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:19 haproxy[25]: 
172.22.0.8,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.7,couchdb.3,200,3,6,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.7,couchdb.1,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.7,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.7,couchdb.3,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:19 haproxy[25]: 172.22.0.7,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.8,couchdb.2,200,2,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.8,couchdb.3,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.8,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.8,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.8,couchdb.3,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.7,couchdb.1,200,1,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.7,couchdb.2,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.7,couchdb.3,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.7,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:20 haproxy[25]: 172.22.0.7,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.8,couchdb.3,200,1,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.8,couchdb.1,401,42,0,0,GET,/,-,-,'-',353,41,61,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.8,couchdb.2,200,4,0,0,GET,/_membership,-,medic,'-',354,2,174,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.8,couchdb.3,200,19,0,0,GET,/_membership,-,medic,'-',354,14,174,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.7,couchdb.1,200,11,0,0,GET,/,-,medic,'-',436,11,208,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.8,couchdb.2,200,15,0,0,GET,/_membership,-,medic,'-',354,11,174,'-' <150>Mar 22 12:25:21 haproxy[25]: 172.22.0.7,couchdb.3,401,16,1,0,GET,/,-,-,'-',353,3,61,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.7,couchdb.1,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.7,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.7,couchdb.3,200,3,0,0,GET,/_membership,-,medic,'-',354,2,174,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.8,couchdb.1,200,5,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.8,couchdb.2,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.8,couchdb.3,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.8,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:22 haproxy[25]: 172.22.0.8,couchdb.2,200,3,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <145>Mar 22 12:25:23 haproxy[25]: Server couchdb-servers/couchdb.3 is DOWN, reason: Layer7 wrong status, code: 0, info: "via agent : down", check duration: 219ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. 
<150>Mar 22 12:25:23 haproxy[25]: 172.22.0.7,couchdb.1,200,1,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:23 haproxy[25]: 172.22.0.7,couchdb.2,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:23 haproxy[25]: 172.22.0.7,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:23 haproxy[25]: 172.22.0.7,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:23 haproxy[25]: 172.22.0.7,couchdb.1,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.8,couchdb.2,200,2,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.8,couchdb.1,401,3,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.8,couchdb.2,200,3,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.8,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.8,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.7,couchdb.1,200,4,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.7,couchdb.2,401,3,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.7,couchdb.1,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.7,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:24 haproxy[25]: 172.22.0.7,couchdb.1,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <145>Mar 22 12:25:24 haproxy[25]: Server couchdb-servers/couchdb.1 is DOWN, reason: Layer7 wrong status, code: 0, info: "via agent : down", check duration: 212ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. 
<150>Mar 22 12:25:25 haproxy[25]: 172.22.0.8,couchdb.2,200,5,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.8,couchdb.2,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.8,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.8,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.8,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.7,couchdb.2,200,2,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.7,couchdb.2,401,1,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.7,couchdb.2,200,3,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.7,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:25 haproxy[25]: 172.22.0.7,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.8,couchdb.2,200,2,0,0,GET,/,-,medic,'-',436,1,208,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.8,couchdb.2,401,3,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.8,couchdb.2,200,4,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.8,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.8,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.7,couchdb.2,200,4,0,0,GET,/,-,medic,'-',436,2,208,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.7,couchdb.2,401,4,0,0,GET,/,-,-,'-',353,1,61,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.7,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.7,couchdb.2,200,1,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <150>Mar 22 12:25:26 haproxy[25]: 172.22.0.7,couchdb.2,200,2,0,0,GET,/_membership,-,medic,'-',354,1,174,'-' <145>Mar 22 12:25:26 haproxy[25]: Server couchdb-servers/couchdb.2 is DOWN, reason: Layer7 wrong status, code: 0, info: "via agent : down", check duration: 217ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. <144>Mar 22 12:25:26 haproxy[25]: backend couchdb-servers has no server available! <150>Mar 22 12:25:27 haproxy[25]: 172.22.0.8,,503,0,0,0,GET,/,-,medic,'-',222,-1,-,'-' [alert] 080/122518 (1) : parseBasic loaded [alert] 080/122518 (1) : parseCookie loaded [alert] 080/122518 (1) : replacePassword loaded [NOTICE] 080/122518 (1) : New worker #1 (25) forked [WARNING] 080/122523 (25) : Server couchdb-servers/couchdb.3 is DOWN, reason: Layer7 wrong status, code: 0, info: "via agent : down", check duration: 219ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. [WARNING] 080/122524 (25) : Server couchdb-servers/couchdb.1 is DOWN, reason: Layer7 wrong status, code: 0, info: "via agent : down", check duration: 212ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. [WARNING] 080/122526 (25) : Server couchdb-servers/couchdb.2 is DOWN, reason: Layer7 wrong status, code: 0, info: "via agent : down", check duration: 217ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. 
[NOTICE] 080/122526 (25) : haproxy version is 2.3.19-0647791 [NOTICE] 080/122526 (25) : path to executable is /usr/local/sbin/haproxy [ALERT] 080/122526 (25) : backend 'couchdb-servers' has no server available! <150>Mar 22 12:25:27 haproxy[25]: 172.22.0.7,,503,0,0,0,GET,/,-,medic,'-',222,-1,-,'-' ```
couchdb.1 log ``` -name couchdb@127.0.0.1 {"error":"bad_request","reason":"Cluster is already enabled"} Waiting for cht couchdb couchdb is ready jq: error: syntax error, unexpected '=' (Unix shell quoting issues?) at , line 1: .all_nodes == .cluster_nodes and (.all_nodes | length) === 3 jq: 1 compile error (23) Failed writing body % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 178 100 62 100 116 3100 5800 --:--:-- --:--:-- --:--:-- 8900 couchdb is ready % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 243 100 12 100 231 292 5634 --:--:-- --:--:-- --:--:-- 5926 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed {"ok":true} {"ok":true} {"ok":true} {"ok":true} {"error":"setup_error","reason":"Cluster setup timed out waiting for nodes to connect"} {"all_nodes":["couchdb@couchdb.1","couchdb@couchdb.2","couchdb@couchdb.3"],"cluster_nodes":["couchdb@127.0.0.1","couchdb@couchdb.1","couchdb@couchdb.2","couchdb@couchdb.3"]} 100 111 100 12 100 99 750 6187 --:--:-- --:--:-- --:--:-- 6937 couchdb is ready % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 243 100 12 100 231 500 9625 --:--:-- --:--:-- --:--:-- 10125 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 111 100 12 100 99 461 3807 --:--:-- --:--:-- --:--:-- 4269 jq: error: syntax error, unexpected '=' (Unix shell quoting issues?) at , line 1: .all_nodes == .cluster_nodes and (.all_nodes | length) === 3 jq: 1 compile error (23) Failed writing body % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 116 100 88 100 28 17 5 0:00:05 0:00:05 --:--:-- 18 [error] 2023-03-22T12:20:17.832604Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/_global_changes.1679415778">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:20.859225Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/_replicator.1679415778">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:23.878236Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/_users.1679415778">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:26.904539Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-audit.1679415849">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:29.933344Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-logs.1679416225">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:32.964953Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-sentinel.1679415846">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:35.981701Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-user-ac1-meta.1679417161">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:39.009472Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-user-ac2-meta.1679417759">> error:{error,{nodedown,<<"progress not possible">>}} [error] 
2023-03-22T12:20:42.038168Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-user-medic-meta.1679416130">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:45.069390Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic-users-meta.1679415846">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:48.090941Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/00000000-1fffffff/medic.1679415780">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:51.125101Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/_global_changes.1679415778">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:54.151813Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/_replicator.1679415778">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:20:57.177236Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/_users.1679415778">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:00.212913Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-audit.1679415849">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:03.237618Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-logs.1679416225">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:06.268839Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-sentinel.1679415846">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:09.294992Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-user-ac1-meta.1679417161">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:12.318905Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-user-ac2-meta.1679417759">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:15.343427Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-user-medic-meta.1679416130">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:18.360801Z couchdb@couchdb.1 <0.2689.0> -------- could not load validation funs {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:18.361738Z couchdb@couchdb.1 emulator -------- Error in process <0.2691.0> on node 'couchdb@couchdb.1' with exit value: {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:18.371844Z couchdb@couchdb.1 <0.264.0> -------- Could not get design docs for <<"shards/60000000-7fffffff/medic-users-meta.1679415846">> error:{error,{nodedown,<<"progress not possible">>}} [error] 2023-03-22T12:21:18.795988Z 
couchdb@couchdb.1 <0.2788.0> -------- could not load validation funs {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_lru,lru_start,2,[{file,"src/ddoc_cache_lru.erl"},{line,246}]},{couch_db,'-load_validation_funs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,887}]}]} [error] 2023-03-22T12:21:18.797274Z couchdb@couchdb.1 emulator -------- Error in process <0.2789.0> on node 'couchdb@couchdb.1' with exit value: {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_lru,lru_start,2,[{file,"src/ddoc_cache_lru.erl"},{line,246}]},{couch_db,'-load_validation_funs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,887}]}]} [error] 2023-03-22T12:21:19.296607Z couchdb@couchdb.1 <0.2862.0> -------- could not load validation funs {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:19.297194Z couchdb@couchdb.1 emulator -------- Error in process <0.2863.0> on node 'couchdb@couchdb.1' with exit value: {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:19.797454Z couchdb@couchdb.1 emulator -------- Error in process <0.2942.0> on node 'couchdb@couchdb.1' with exit value: {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:19.797614Z couchdb@couchdb.1 <0.2940.0> -------- could not load validation funs {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:20.293337Z couchdb@couchdb.1 <0.3034.0> -------- could not load validation funs {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:21:20.293511Z couchdb@couchdb.1 emulator -------- Error in process <0.3039.0> on node 'couchdb@couchdb.1' with exit value: {{badmatch,{error,{nodedown,<<"progress not possible">>}}},[{ddoc_cache_entry_validation_funs,recover,1,[{file,"src/ddoc_cache_entry_validation_funs.erl"},{line,33}]},{ddoc_cache_entry,do_open,1,[{file,"src/ddoc_cache_entry.erl"},{line,297}]}]} [error] 2023-03-22T12:26:18.360711Z couchdb@couchdb.1 <0.2708.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2695.0>) mfa: fabric_rpc:all_docs/3 exit:timeout 
[{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:18.368063Z couchdb@couchdb.1 <0.2707.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2695.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:18.374116Z couchdb@couchdb.1 <0.2711.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2709.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:18.799286Z couchdb@couchdb.1 <0.2793.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2791.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:18.802247Z couchdb@couchdb.1 <0.2796.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2789.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:18.804216Z couchdb@couchdb.1 <0.2794.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2791.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:18.804406Z couchdb@couchdb.1 <0.2797.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2789.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:19.300264Z couchdb@couchdb.1 <0.2867.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2865.0>) mfa: fabric_rpc:all_docs/3 exit:timeout 
[{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:19.300503Z couchdb@couchdb.1 <0.2868.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2865.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:19.803603Z couchdb@couchdb.1 <0.2946.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2944.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:19.803847Z couchdb@couchdb.1 <0.2947.0> -------- rexi_server: from: couchdb@couchdb.1(<0.2944.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:20.294324Z couchdb@couchdb.1 <0.3043.0> -------- rexi_server: from: couchdb@couchdb.1(<0.3041.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:20.299288Z couchdb@couchdb.1 <0.3044.0> -------- rexi_server: from: couchdb@couchdb.1(<0.3041.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] [error] 2023-03-22T12:26:20.311449Z couchdb@couchdb.1 <0.3045.0> -------- rexi_server: from: couchdb@couchdb.1(<0.3041.0>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,265}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,205}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,462}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,682}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,140}]}] ```
dianabarsan commented 1 year ago

Hi @lorerod

I'll try to replicate this locally.

dianabarsan commented 1 year ago

Hi @lorerod I've replicated the issue locally, I'll be back soon with a fix.

dianabarsan commented 1 year ago

Hi @lorerod

I've made an update to the documentation to add an additional step to the clustered migration. It involves running an additional command after the move-shards command.

For some reason, my haproxy process had trouble starting up every time, but it restarted itself automatically after 1 minute.

rsyslog startup failure, child did not respond within startup timeout (60 seconds)

This doesn't have anything to do with the migration, but it can be a nuisance.

Could you please try to integrate the node deletion step and wait for 1 minute after starting the 4.1 instance to check if it works?
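
For clarity, a sketch of where that extra step would sit in the sequence; the command name here is an assumption inferred from the "Node couchdb@127.0.0.1 was removed successfully" output in a later comment, so check the couchdb-migration README for the exact name:

```
docker-compose run couch-migration move-shards $shard_matrix
# New step: delete the old 3.x node from the cluster metadata (command name assumed).
docker-compose run couch-migration remove-node couchdb@127.0.0.1
docker-compose run couch-migration verify
```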

Thanks a lot!

lorerod commented 1 year ago

Hi @dianabarsan It worked! I will leave the details tomorrow but I wanted to let you know.

lorerod commented 1 year ago

Environment: same as in the previous comment, except that the couchdb compose file is cht-couchdb-clustered.yml.

Migrating from medic-os to clustered node 4.1.0 ✅

- I logged in to the 3.17 instance with users ac1 and ac2 on two different phones.
- Installed the CHT data migration tool. Provided CouchDb data:
  - `export CHT_NETWORK=migration_project_medic-net`
  - `export COUCH_URL=http://medic:password@migration_project_haproxy_1:443/`
- I prepared the CHT-Core 3.x installation for upgrading:
  - Stopped the API by getting a shell on the Medic OS container and calling /boot/svc-stop medic-api:

```
Debug: Service 'medic-api/medic-api' exited with status 143
Info: Service 'medic-api/medic-api' was stopped successfully
Success: Finished stopping services in package 'medic-api'
```

  - Initiated view indexing by running `docker-compose run couch-migration pre-index-views 4.1.0`
  - Saved the existing CouchDb configuration by running `docker-compose run couch-migration get-env`, which returned:

```
COUCHDB_USER=medic
COUCHDB_PASSWORD=password
COUCHDB_SECRET=bb45d28ca97bdf024bbff179e0a165f7
COUCHDB_UUID=459be07c858612f49a796f8af462ea0d
```

  - Backed up /srv/storage/medic-core/couchdb/data in the medic-os container
  - At this point, I stopped my 3.17 medic-os container.
- Launched the 4.x CouchDb installation - multi node:
  - I downloaded the 4.x clustered CouchDb docker-compose file into a directory: `curl -s -o ./cht-couchdb-clustered.yml https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:4.1.0/docker-compose/cht-couchdb-clustered.yml`
  - Created the data folders:

```
/couchdb/data/main
/couchdb/data/secondary1
/couchdb/data/secondary2
```

  - Created a shards and a .shards directory in every secondary node folder
  - Copied the 3.17 backup data into the /couchdb/data/main directory
  - Set the env variables for the couchdb cluster:

```
export COUCHDB_USER=medic
export COUCHDB_PASSWORD=password
export COUCHDB_SECRET=bb45d28ca97bdf024bbff179e0a165f7
export COUCHDB_UUID=459be07c858612f49a796f8af462ea0d
export DB1_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-cluster/couchdb-data/main
export DB2_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-cluster/couchdb-data/secondary1
export DB3_DATA=/Users/marialorenarodriguezviruel/medic-workspace/couchdb-cluster/couchdb-data/secondary2
```

  - Updated the couchdb-migration environment variables with the 4.x information:

```
export COUCH_URL=http://medic:password@couchdb-cluster_couchdb.1_1:5984
export CHT_NETWORK=cht-net
```

  - Started 4.1 CouchDb with `docker-compose -f cht-couchdb-clustered.yml up -d` inside the new project directory
  - Checked CouchDb is up with `docker-compose run couch-migration check-couchdb-up 3` inside the couchdb-migration directory:

```
Creating couchdb-migration_couch-migration_run ... done
Waiting for CouchDb to be ready...
Waiting for CouchDb Cluster to be ready....
CouchDb Cluster is Ready
```

  - Generated the shard distribution matrix and got instructions for final shard locations:

```
shard_matrix=$(docker-compose run couch-migration generate-shard-distribution-matrix)
docker-compose run couch-migration shard-move-instructions $shard_matrix
```

  - Manually moved the shard files to the correct location
  - Changed metadata to match the new shard distribution:

```
docker-compose run couch-migration move-shards $shard_matrix
Shards moved successfully
```

  - Removed the old node from the cluster:

```
Creating couchdb-migration_couch-migration_run ... done
Node couchdb@127.0.0.1 was removed successfully
```

  - `docker-compose run couch-migration verify`:

```
...
Migration verification passed.
```

  - Started CHT-Core 4.1 using `curl -s -o ./cht-core.yml https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:4.1.0/docker-compose/cht-core.yml` and `COUCHDB_SERVERS=couchdb.1,couchdb.2,couchdb.3`
- Once the migration is successful:
  - ✅ offline users should not need to resync data
  - ✅ users should not get logged out ![Screenshot_20230328-160652](https://user-images.githubusercontent.com/21312057/228591370-51b18e21-9a21-47bf-9e38-0df8998cfa1c.jpg)
  - ✅ offline users should be able to resync ![Screenshot_20230328-160628](https://user-images.githubusercontent.com/21312057/228591324-94b59cdf-7397-422f-a8eb-2c05e8bc60ea.jpg)
  - ⚠️ All data should be available in the new instance. For this I compared the database information JSON and the Fauxton database info. I found some differences in the JSON, but `doc_count` is the same in both instances.
  - Fauxton database information 3.17: ![317](https://user-images.githubusercontent.com/21312057/228592123-f6f8e957-396e-4ed9-827c-e1e41f50ee6d.png)
  - Database info JSON 3.17:

```
{ "db_name":"medic", "purge_seq":"0-g1AAAAFTeJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFOSApBMsv___39WIgMedYkMSfIEFeWxAEmGBiAFVDofv9okB5DF8USauQBi5n4CZiaAzKwn0swHEDOJUnsAovY-UG0WAApQXo4", "update_seq":"12213-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMiQ5L8____s5IYGNh68ahLUgCSSfYwpWr4lDqAlMbDlEbjU5oAUloPVco6EY_SPBYgydAApICq54NNFiSofAFE-X6w6QcJKj8AUX4frLyfoPIHEOUQt2_NAgBglmJ3", "sizes":{ "file":133301952, "external":124029761, "active":119027566 }, "other":{ "data_size":124029761 }, "doc_del_count":15, "doc_count":10543, "disk_size":133301952, "disk_format_version":7, "data_size":119027566, "compact_running":false, "cluster":{ "q":8, "n":1, "w":1, "r":1 }, "instance_start_time":"0" }
```

  - Fauxton database information 4.1.0: ![410](https://user-images.githubusercontent.com/21312057/228592417-2e76c5b7-cc6b-4eee-b96c-9e705f083d1d.png)
  - Database info JSON 4.1.0:

```
{ "db_name":"medic", "purge_seq":"0-g1AAAAFTeJzLYWBg4MhgTmEQTM4vTc5ISXKA0npGOUAppjwWIMnQAKT-__8_PyuRAY_aJAUgmWQPVIhfHcTMBxAzcao1RFJ7AKL2Pn61SQkg--sJmpnIkCSPR5ExxDAHkGHxBNVBHLgA4sD9QLVZABSLaqU", "update_seq":"12886-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXKA0nqGOUAppkSGJPn___9nJTEwsM3Eqs4IrC5JAUgm2cOUGmJVagxR6gBSGg9TWoPH9qQEkNJ6qFLWO3gckMcCJBkagBRQ9Xywyc14HAFRvgCifD9YeQgeh0CUH4Aovw9WrkrQMQ8gyiHetMgCANqsbjI", "sizes":{ "file":11798448, "external":15129047, "active":10595509 }, "other":{ "data_size":15129047 }, "doc_del_count":15, "doc_count":10544, "disk_size":11798448, "disk_format_version":7, "data_size":10595509, "compact_running":false, "cluster":{ "q":8, "n":1, "w":1, "r":1 }, "instance_start_time":"0" }
```

@dianabarsan I was able to migrate successfully!
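
As an optional extra check after a clustered migration, a minimal sketch that inspects how the medic database's shards are spread across the nodes, using CouchDb's /{db}/_shards endpoint (the host is a placeholder):

```
curl -s http://medic:password@<couchdb-host>:5984/medic/_shards | jq '.shards'
```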

dianabarsan commented 1 year ago

I have merged the documentation PR. Closing this as completed.