solidfire / solidfire-cli

https://solidfire.github.io/solidfire-cli
Apache License 2.0

API request for backup volumes/snapshot to S3 object store? #33

Open devops-42 opened 5 years ago

devops-42 commented 5 years ago

Hi,

How can I use the "Backup to" functionality via the CLI to initiate a volume/snapshot backup into a defined S3 object store?

Thanks for your help!

scaleoutsean commented 4 years ago

It seems this isn't documented (see #32).

This is an example you can capture by enabling API logging in the UI and using the feature to back up to S3 (one of the options):

{
  "id": 126,
  "method": "StartBulkVolumeRead",
  "params": {
    "volumeID": 5,
    "format": "native",
    "script": "bv_internal.py",
    "scriptParameters": {
      "range": {
        "lba": 0,
        "blocks": 17090048
      },
      "write": {
        "awsAccessKeyID": "123123123",
        "awsSecretAccessKey": "41231231234",
        "bucket": "backoops",
        "prefix": "myClusterName-k3z3/boot-5",
        "endpoint": "s3",
        "format": "native",
        "hostname": "s3.my.org"
      }
    }
  }
}
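The capture above is just a JSON-RPC POST to the cluster's API endpoint, so the same request can be replayed outside the CLI. A minimal Python sketch, assuming a management VIP, admin credentials, and API version 10.0 (all values below, including the S3 credentials, are placeholders from the capture):

import requests
import urllib3

urllib3.disable_warnings()  # SolidFire clusters commonly use self-signed certificates

MVIP = "192.168.1.30"         # hypothetical management virtual IP
AUTH = ("admin", "password")  # hypothetical cluster admin credentials

# JSON-RPC payload copied from the captured request above
payload = {
    "id": 126,
    "method": "StartBulkVolumeRead",
    "params": {
        "volumeID": 5,
        "format": "native",
        "script": "bv_internal.py",
        "scriptParameters": {
            "range": {"lba": 0, "blocks": 17090048},
            "write": {
                "awsAccessKeyID": "123123123",
                "awsSecretAccessKey": "41231231234",
                "bucket": "backoops",
                "prefix": "myClusterName-k3z3/boot-5",
                "endpoint": "s3",
                "format": "native",
                "hostname": "s3.my.org"
            }
        }
    }
}

response = requests.post(f"https://{MVIP}/json-rpc/10.0",
                         json=payload, auth=AUTH, verify=False)
print(response.json())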

It seems the params would be as follows (I've omitted explaining the obvious ones). You'd have to get some of these values from the CLI or config files, calculate others (like the number of blocks), and build a nested dictionary of params:

{
  "volumeID": 5,
  "format": "native",
  "script": "bv_internal.py",
  "scriptParameters": {
    "range": {
      "lba": 0,
      "blocks": 17090048
    },
    "write": {
      "awsAccessKeyID": "123123123",
      "awsSecretAccessKey": "41231231234",
      "bucket": "backoops",
      "prefix": "myClusterName-k3z3/boot-5",
      "endpoint": "s3",
      "format": "native",
      "hostname": "s3.my.org"
    }
  }
}

Assuming the above is in /tmp/params.txt, you could build the parameters from the bottom up (by going in reverse):

#!/usr/bin/python3
import json

# Load the saved parameters from the file
with open('/tmp/params.txt') as data:
    json_array = json.load(data)

print(json_array['scriptParameters'])
# {'range': {'lba': 0, 'blocks': 17090048}, 'write': {'awsAccessKeyID': '123123123', 'awsSecretAccessKey': '41231231234', 'bucket': 'backoops', 'prefix': 'myClusterName-k3z3/boot-5', 'endpoint': 's3', 'format': 'native', 'hostname': 's3.my.org'}}
print(json_array['scriptParameters']['range'])
# {'lba': 0, 'blocks': 17090048}
print(json_array['scriptParameters']['write'])
# {'awsAccessKeyID': '123123123', 'awsSecretAccessKey': '41231231234', 'bucket': 'backoops', 'prefix': 'myClusterName-k3z3/boot-5', 'endpoint': 's3', 'format': 'native', 'hostname': 's3.my.org'}
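
Going the other way, you can edit the loaded structure and serialize it back into the single-line JSON string the CLI expects. A small sketch (the volumeID and prefix below are hypothetical values):

import json

with open('/tmp/params.txt') as f:
    params = json.load(f)

# Adjust the per-volume values before reusing the parameters (hypothetical values)
params['volumeID'] = 6
params['scriptParameters']['write']['prefix'] = 'myClusterName-k3z3/boot-6'

# Compact, single-line JSON suitable for passing to the CLI
print(json.dumps(params, separators=(',', ':')))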

You can try to back up and restore a volume from the UI and see what your parameters for backup and restore should be.

For more than 2-3 VMs I would suggest using backup software, either free or commercial.

drose12 commented 2 years ago

Any update on this? I too am having issues getting this to work.

scaleoutsean commented 2 years ago

> Any update on this? I too am having issues getting this to work.

What doesn't work?

The same recipe given in README.md should work with Backup to S3: just pack the example parameters from the JSON above into --parameters, as in the example from the readme: sfcli -c 0 SFApi Invoke --method GetAccountByID --parameters "{\"accountID\":94}"
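
For illustration, packing the StartBulkVolumeRead parameters from the captured JSON the same way would look roughly like this (untested sketch; all values are the placeholders from earlier in the thread):

sfcli -c 0 SFApi Invoke --method StartBulkVolumeRead --parameters "{\"volumeID\":5,\"format\":\"native\",\"script\":\"bv_internal.py\",\"scriptParameters\":{\"range\":{\"lba\":0,\"blocks\":17090048},\"write\":{\"awsAccessKeyID\":\"123123123\",\"awsSecretAccessKey\":\"41231231234\",\"bucket\":\"backoops\",\"prefix\":\"myClusterName-k3z3/boot-5\",\"endpoint\":\"s3\",\"format\":\"native\",\"hostname\":\"s3.my.org\"}}}"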

drose12 commented 2 years ago

The range lba and blocks part does not work. I am attempting to just do it in the Python SDK instead.

scaleoutsean commented 2 years ago

> The range lba and blocks part does not work. I am attempting to just do it in the Python SDK instead.

The way the CLI works for me: I use SFApi Invoke, lba 0 (as I back up entire volumes), and volSizeBytes/4096 for the number of blocks. I described my approach here.
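
A minimal sketch of that calculation (the volume size below is a made-up value chosen to match the earlier capture; in practice you'd read the volume's total size in bytes from the cluster first):

# lba 0 = start of the volume; blocks = volume size divided by the 4096-byte block size
vol_size_bytes = 70000836608  # hypothetical volume size in bytes
lba = 0
blocks = vol_size_bytes // 4096

print({"lba": lba, "blocks": blocks})  # {'lba': 0, 'blocks': 17090048}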

drose12 commented 2 years ago

Thank you, yes, I found this and I'm choosing the Python method, as it works and allows for some more sophistication.