Percona-Lab / mongodb_consistent_backup

A tool for performing consistent backups of MongoDB Clusters or Replica Sets
https://www.percona.com
Apache License 2.0

Authentication error when backing up a MongoDB sharded cluster #54

Closed FafaData closed 7 years ago

FafaData commented 7 years ago

Here are the logs:

    [admin@iZ23eo8z1y0Z mongodb_consistent_backup-master]$ /usr/local/bin/mongodb-consistent-backup -P 30001 -u dbadmin -p 'Tg@Rs12fg' -n fafabackup -l /alidata1/admin/backup
    [2016-10-28 15:02:47,423] [INFO] [MainProcess] [Backup:run:220] Starting mongodb-consistent-backup version 0.3.1 (git commit hash: GIT_COMMIT_HASH)
    [2016-10-28 15:02:47,424] [INFO] [MainProcess] [Backup:run:267] Running backup of localhost:30001 in sharded mode
    [2016-10-28 15:02:47,447] [INFO] [MainProcess] [Sharding:get_start_state:41] Began with balancer state running: True
    [2016-10-28 15:02:47,526] [INFO] [MainProcess] [Sharding:get_config_server:129] Found sharding config server: cfg1/10.139.55.215:20001

    [2016-10-28 15:03:17,759] [CRITICAL] [MainProcess] [DB:auth_if_required:42] Unable to authenticate with host cfg1/10.139.55.215:20001: cfg1/10.139.55.215:20001: [Errno -2] Name or service not known
    [2016-10-28 15:03:17,760] [CRITICAL] [MainProcess] [Sharding:get_config_server:142] Unable to locate config servers for localhost:30001!
    [2016-10-28 15:03:17,760] [ERROR] [MainProcess] [Backup:exception:199] Problem getting shard secondaries! Error: cfg1/10.139.55.215:20001: [Errno -2] Name or service not known
    Traceback (most recent call last):
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/Backup.py", line 293, in run
        self.secondaries = self.replset_sharded.find_secondaries()
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/ReplsetSharded.py", line 68, in find_secondaries
        for rs_name in self.get_replsets():
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/ReplsetSharded.py", line 59, in get_replsets
        configsvr = self.sharding.get_config_server()
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/Sharding.py", line 143, in get_config_server
        raise e
    ServerSelectionTimeoutError: cfg1/10.139.55.215:20001: [Errno -2] Name or service not known
    [2016-10-28 15:03:17,761] [INFO] [MainProcess] [Backup:cleanup_and_exit:171] Starting cleanup and exit procedure! Killing running threads
    [2016-10-28 15:03:17,761] [INFO] [MainProcess] [Sharding:restore_balancer_state:82] Restoring balancer state to: True
    [2016-10-28 15:03:17,764] [INFO] [MainProcess] [Backup:cleanup_and_exit:194] Cleanup complete. Exiting

Please help me; I don't know why this happens. It says "Unable to authenticate with host cfg1/10.139.55.215:20001", but the whole cluster and every shard use the same user and password.

timvaillancourt commented 7 years ago

Hi @FafaData,

I tested with version 0.3.1 but was unable to reproduce your issue; I am always able to authenticate to config servers.

At first I thought it was related to IP addresses being used, but after many tests that does not seem to be the case: we pass the exact string the mongos uses for its config DBs to the Mongo driver without changing the value.
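For context, the `configDB` string a mongos reports looks like `cfg1/10.139.55.215:20001,...` — a replica-set name, a slash, then a comma-separated host list. If the `cfg1/` prefix is handed to the driver unchanged, the driver can end up trying to resolve `cfg1/10.139.55.215` as a DNS name, which matches the "Name or service not known" error above. A minimal sketch of splitting that string (a hypothetical helper for illustration, not the tool's actual code):

```python
def parse_configdb(configdb):
    """Split a mongos configDB string like 'cfg1/host1:port1,host2:port2'
    into (replset_name, [host:port, ...]).

    The 'name/' prefix is optional: older SCCC-style config strings
    are just a plain host list with no replica-set name.
    """
    replset = None
    hosts = configdb
    if "/" in configdb:
        replset, hosts = configdb.split("/", 1)
    return replset, hosts.split(",")


print(parse_configdb("cfg1/10.139.55.215:20001,10.139.55.215:20011"))
# -> ('cfg1', ['10.139.55.215:20001', '10.139.55.215:20011'])
```

With the name and hosts separated, each config server member can be contacted individually instead of the driver treating the whole string as one hostname.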

From the error it seems that your IP is being treated as a DNS name, but I'm not yet sure why. To help, can you provide these outputs:

  1. The full output of getCmdLineOpts on the mongos the backup tool is pointed at: https://docs.mongodb.com/manual/reference/command/getCmdLineOpts/
  2. The output of rs.conf(): https://docs.mongodb.com/manual/reference/method/rs.conf/
  3. Full output from mongodb-consistent-backup using the "--verbose" flag.
  4. The .getUser() output for the user on the config server with the problem, 10.139.55.215:20001 (remove anything sensitive): https://docs.mongodb.com/v3.2/reference/method/db.getUser/

Looking forward to this added info, thanks!

FafaData commented 7 years ago

Hi, thank you very much for your help. Here are my details!

1. The full output of getCmdLineOpts on the mongos the backup tool is pointed at:

mongos> db.runCommand({ getCmdLineOpts: 1 })
{
"argv" : [
    "mongos",
    "-f",
    "/alidata1/app/mongodb/config/mongod-r.conf"
],
"parsed" : {
    "config" : "/alidata1/app/mongodb/config/mongod-r.conf",
    "net" : {
        "port" : 30001
    },
    "processManagement" : {
        "fork" : true
    },
    "sharding" : {
        "configDB" : "cfg1/10.139.55.215:20001,10.139.55.215:20011,10.139.55.215:20111"
    },
    "systemLog" : {
        "destination" : "file",
        "logAppend" : true,
        "path" : "/alidata1/app/mongodb/log/mongos-r.log"
    }
},
"ok" : 1
}

2. The output of rs.conf(). There are three shards in the cluster; one of them is:

rs1:PRIMARY> rs.conf()
{
"_id" : "rs1",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
    {
        "_id" : 0,
        "host" : "10.139.55.215:10001",
        "arbiterOnly" : false,
        "buildIndexes" : true,
        "hidden" : false,
        "priority" : 3,
        "tags" : {

        },
        "slaveDelay" : NumberLong(0),
        "votes" : 1
    },
    {
        "_id" : 1,
        "host" : "10.139.55.215:10011",
        "arbiterOnly" : false,
        "buildIndexes" : true,
        "hidden" : false,
        "priority" : 2,
        "tags" : {

        },
        "slaveDelay" : NumberLong(0),
        "votes" : 1
    },
    {
        "_id" : 2,
        "host" : "10.139.55.215:10111",
        "arbiterOnly" : false,
        "buildIndexes" : true,
        "hidden" : false,
        "priority" : 1,
        "tags" : {

        },
        "slaveDelay" : NumberLong(0),
        "votes" : 1
    }
],
"settings" : {
    "chainingAllowed" : true,
    "heartbeatIntervalMillis" : 2000,
    "heartbeatTimeoutSecs" : 10,
    "electionTimeoutMillis" : 10000,
    "getLastErrorModes" : {

    },
    "getLastErrorDefaults" : {
        "w" : 1,
        "wtimeout" : 0
    },
    "replicaSetId" : ObjectId("58130e0d77c25e5a51bcbdff")
}

}

3. Full output from mongodb-consistent-backup using the "--verbose" flag:

$ /usr/local/bin/mongodb-consistent-backup -H 10.139.55.215 -a admin -P 30001 -u dbadmin -p '****' -n fafabackup4 -l /alidata1/admin/backup
[2016-11-01 10:33:26,812] [INFO] [MainProcess] [Backup:run:220] Starting mongodb-consistent-backup version 0.3.1 (git commit hash: GIT_COMMIT_HASH)

4. The .getUser() output for the user on the config server with the problem, 10.139.55.215:20001:

cfg1:PRIMARY> db.getUser("dbadmin")
null

My cluster structure is like this:

1. The cluster has three data shards, each a three-member replica set, named rs1, rs2 and rs3. There is also a three-member config server replica set named cfg1, and a single mongos.

2. I created the user dbadmin via the mongos, and created the same user in rs1, rs2 and rs3. All of the dbadmin users have the same password. Their permissions are:

        {
            "_id" : "admin.dbadmin",
            "user" : "dbadmin",
            "db" : "admin",
            "roles" : [
                { "role" : "userAdminAnyDatabase", "db" : "admin" },
                { "role" : "root", "db" : "admin" }
            ]
        }

3. Authentication uses a keyfile. My MongoDB version is 3.2.10.

What I tested: first, I disabled authentication by commenting out these config lines:

    keyFile = /alidata1/admin/mongodb_x64_3.2.9//keyfile/key-01
    clusterAuthMode = keyFile

Then I restarted the cluster and ran mongodb-consistent-backup without a user and password:

    [admin@iZ23eo8z1y0Z consistent-backup]$ /usr/local/bin/mongodb-consistent-backup -P 30001 -n fafabackup1 -l /alidata1/admin/backup

It was successful! But when I ran mongodb-consistent-backup with a user and password:

    [admin@iZ23eo8z1y0Z 20161031_1808]$ /usr/local/bin/mongodb-consistent-backup -P 30001 -u dbadmin -p '****' -n fafabackup3 -l /alidata1/admin/backup

it failed with the same error:

    [2016-10-31 18:50:22,539] [CRITICAL] [MainProcess] [DB:auth_if_required:42] Unable to authenticate with host cfg1/10.139.55.215:20001: cfg1/10.139.55.215:20001: [Errno -2] Name or service not known
    [2016-10-31 18:50:22,539] [CRITICAL] [MainProcess] [Sharding:get_config_server:142] Unable to locate config servers for localhost:30001!
    [2016-10-31 18:50:22,539] [ERROR] [MainProcess] [Backup:exception:199] Problem getting shard secondaries! Error: cfg1/10.139.55.215:20001: [Errno -2] Name or service not known
    Traceback (most recent call last):
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/Backup.py", line 293, in run
        self.secondaries = self.replset_sharded.find_secondaries()
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/ReplsetSharded.py", line 68, in find_secondaries
        for rs_name in self.get_replsets():
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/ReplsetSharded.py", line 59, in get_replsets
        configsvr = self.sharding.get_config_server()
      File "/home/admin/.pex/install/MongoBackup-0.3.1-py2-none-any.whl.32e77b741ee1b2f9541da2e6824ccd221ede9f39/MongoBackup-0.3.1-py2-none-any.whl/MongoBackup/Sharding.py", line 143, in get_config_server
        raise e
    ServerSelectionTimeoutError: cfg1/10.139.55.215:20001: [Errno -2] Name or service not known

What I want to know is:

1. Can mongodb-consistent-backup do incremental backups of a MongoDB sharded cluster?
2. If a cluster has a very large amount of data, can the backup files be saved on the nodes themselves? Do you have any experience with TB-scale data?

I would be deeply grateful. Thank you!

FafaData commented 7 years ago

Hi: I have seen your PDF say that incremental backups are supported (see the attached image), but when I ran mongodb-consistent-backup without a user and password, it seemed to do a full backup every time, with no incremental backup. I would be deeply grateful for clarification. Thank you!

timvaillancourt commented 7 years ago

Thanks for this info @FafaData.

Pull request #55 should fix this issue. Until it is merged, you can test with this branch here: https://github.com/timvaillancourt/mongodb_consistent_backup/tree/issue54_config_replset.
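For anyone hitting this before a fixed release: the essence of treating the config servers as a replica set is to give the driver a plain seed list plus a `replicaSet` option, rather than the raw `cfg1/host:port,...` string. A hedged sketch of building such a connection URI (this helper is illustrative, not the code from the pull request):

```python
def configdb_to_uri(configdb, username=None, password=None, authdb="admin"):
    """Build a MongoDB connection URI from a mongos configDB string,
    e.g. 'cfg1/10.139.55.215:20001,10.139.55.215:20011'.

    Note: in real use, username/password should be percent-encoded
    before being embedded in the URI.
    """
    replset, hosts = None, configdb
    if "/" in configdb:
        # Strip the replica-set name so the driver only sees host:port pairs.
        replset, hosts = configdb.split("/", 1)
    cred = ""
    if username:
        cred = "%s:%s@" % (username, password or "")
    uri = "mongodb://%s%s/%s" % (cred, hosts, authdb)
    if replset:
        # Let the driver do replica-set member discovery itself.
        uri += "?replicaSet=%s" % replset
    return uri


print(configdb_to_uri("cfg1/10.139.55.215:20001,10.139.55.215:20011",
                      "dbadmin", "secret"))
# -> mongodb://dbadmin:secret@10.139.55.215:20001,10.139.55.215:20011/admin?replicaSet=cfg1
```

A URI of this shape can be passed to PyMongo's `MongoClient`, which then resolves each seed host individually instead of treating `cfg1/10.139.55.215` as a single DNS name.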

As for "incremental backups", the slide you're quoting describes the long-term vision for the backup tool, not what exists today. We do not currently support incremental backups; they are in our future plans.

dbmurphy commented 7 years ago

👍 Sorry for the confusion @FafaData. I have a webinar today where I will be talking more about the backups (and the exact deck you referenced), and I would be glad to answer any questions you have. However, to get proper incremental backups, a much more comprehensive backup scheduler and oplog-recording daemon is needed. We felt that getting a consistent backup was the first step toward a full system that is purely open source.

Here is the link to the webinar: https://www.percona.com/resources/webinars/mongodb-backups-all-grown

FafaData commented 7 years ago

Thank you for your help. I am very much looking forward to your MongoDB cluster incremental backup method!

madvimer commented 6 years ago

Hello, is there any update on the incremental backup implementation?