Open kaushalkumar opened 2 years ago
Hi @kaushalkumar,
you did nothing wrong; it's the backup-cluster command which is badly named. Medusa performs backups per datacenter, not per cluster. If you have multiple DCs, you'll have to back them up separately by running the operation in each of them.
We will very soon rename it to backup-datacenter to make this more obvious.
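Since backup-cluster currently operates on one datacenter at a time, a cluster-wide backup today means repeating the operation once per DC. A minimal sketch of that loop, assuming hypothetical seed hosts dc1-node1 and dc2-node1 (substitute your own), printed as a dry run rather than executed:

```shell
# One node per datacenter -- hypothetical names, adjust to your topology.
SEEDS="dc1-node1 dc2-node1"

for seed in $SEEDS; do
  # Printed as a dry run; in practice you would run the command on (or via
  # ssh to) a node inside each datacenter, since Medusa backs up the
  # datacenter it is invoked from.
  echo "ssh $seed sudo medusa backup-cluster --backup-name=data070820221111 --mode=full"
done
```

Using the same --backup-name in every DC keeps the per-DC backups grouped under one logical name.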
Hi @adejanovski - Thanks for responding, that was helpful.
While waiting for the response, we tried to look into the code. I am not an expert in Python or Medusa, but it seems that an enhancement in cassandra_utils (https://github.com/thelastpickle/cassandra-medusa/blob/master/medusa/cassandra_utils.py#L171) might enable this functionality. Perhaps, instead of renaming, it might be worth exploring support for this feature by enhancing/overloading the backup-cluster API to provide an option to back up at either the DC level or the cluster level.
This enhancement would demand changes in other areas as well, such as restore, list, and delete. So I think you are the best person to decide whether this change fits Medusa's design.
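To illustrate the kind of change being suggested: the code around the linked line restricts the token map to nodes in the local datacenter. A minimal sketch of how an opt-in cluster-wide mode could relax that filter, using an illustrative token map shape and a hypothetical whole_cluster flag (neither is Medusa's actual API):

```python
# Hypothetical token-map entries, loosely mimicking what Medusa derives from
# the Cassandra python driver; host addresses and DC names are illustrative.
tokenmap = {
    "10.0.1.1": {"dc": "dc1", "rack": "r1"},
    "10.0.1.2": {"dc": "dc1", "rack": "r1"},
    "10.0.2.1": {"dc": "dc2", "rack": "r1"},
}

def nodes_to_backup(tokenmap, local_dc, whole_cluster=False):
    """Return the hosts a backup would cover: only the local DC (current
    behaviour), or every DC if a hypothetical whole-cluster flag is set."""
    return [
        host for host, meta in tokenmap.items()
        if whole_cluster or meta["dc"] == local_dc
    ]

print(nodes_to_backup(tokenmap, "dc1"))                      # dc1 nodes only
print(nodes_to_backup(tokenmap, "dc1", whole_cluster=True))  # all nodes
```

The real change would be larger, since restore, list, and delete all assume per-DC scoping, but the core filter is this small.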
Hi - We are trying to back up data (from a Cassandra database) in a multi-datacenter cluster using Medusa (local mode). The backup is created for dc1 (all nodes), but none of the nodes of dc2 are backed up. We have tried different configurations, but without success.
It seems Medusa uses the Cassandra python driver to discover the nodes in the cluster, but somehow it is not able to discover the dc2 nodes.
Can you please check and let us know what could be missing? If there is a readme/blog for this use case, please point us to it; it would help us.
Configuration
Version: [cqlsh 5.0.1 | Cassandra 3.11.11 | CQL spec 3.4.4 | Native protocol v4], Medusa [0.13.3]
medusa.ini:
Nodetool Status:
Medusa Backup
sudo medusa --verbosity backup-cluster --backup-name=data070820221111 --mode=full
Please do let us know if any other information is required.