drose12 closed this issue 2 years ago.
I see from the GUI where it gets its numbers from. If you get volume details from the GUI you see:
Non-Zero Blocks: 8,492,761
Zero Blocks: 1,272,871
Summing these two you get 9,765,632.
That's interesting.
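For reference, that same sum can be pulled programmatically. A minimal sketch, assuming the Get-SFVolumeStats cmdlet from the SolidFire PowerShell Tools and its NonZeroBlocks/ZeroBlocks properties (treat the exact cmdlet and property names as assumptions):

# Sketch: derive the total block count from volume stats rather than the GUI.
$stats = Get-SFVolumeStats -VolumeID 36
$totalBlocks = $stats.NonZeroBlocks + $stats.ZeroBlocks
Write-Host "Total blocks: $totalBlocks"   # 8,492,761 + 1,272,871 = 9,765,632 for this volume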
I updated my Invoke-SFApi-based examples on the blog and tested with both 512e and 4096-byte volumes. When a range (size in 4 kB blocks) is specified, the backup-to-S3 log entries from Invoke-SFApi are identical to what you get when you back up from the Web UI. When it isn't (the range params are optional), you see this issue.
So for now it seems safe to work around this by using that approach.
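A minimal sketch of that workaround, assuming Invoke-SFApi accepts a -Method name and a -Params hashtable as in the blog examples; the volume ID, bucket, and other values below are the ones quoted elsewhere in this thread:

# Sketch: call StartBulkVolumeRead directly with an explicit range
# (blocks = Non-Zero Blocks + Zero Blocks from the volume details).
$params = @{
    volumeID = 36
    format   = "native"
    script   = "bv_internal.py"
    scriptParameters = @{
        range = @{ lba = 0; blocks = 9765632 }
        write = @{
            awsAccessKeyID     = "solidfire"
            awsSecretAccessKey = "****"
            bucket             = "solidfire_backups"
            prefix             = "redacted-36"
            endpoint           = "s3"
            format             = "native"
            hostname           = "redacted"
        }
    }
}
Invoke-SFApi -Method "StartBulkVolumeRead" -Params $params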
Have you tried to do a restore? I suspect it may be just a cosmetic (logging) issue, but the main concern is obviously whether such backups (taken without the range setting) are valid.
I gave this a try and it seems to be just a cosmetic problem.
How I tested:
No matter whether I back up from the Web UI or the CLI (with and without range params), all checksums are identical.
The workaround for those who rely on the logs being correct is to provide the correct range params (I have an example in the "backup to S3" blog post). On restore to a larger volume, I don't expect the absence of range params would matter, but maybe the log would show a nonsense figure as well (at the same time, the restore ought to work). I haven't tested this because anyone restoring to a larger volume should use the range param as a matter of best practice.
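For anyone who wants to repeat that comparison, one simple approach is to download the objects written under each backup prefix and compare file hashes locally; a sketch with illustrative paths (the .\ui-backup and .\ps-backup folders are assumptions, not part of the product):

# Sketch: compare checksums of backup objects downloaded from S3 for the
# Web UI run and the PowerShell run.
$uiHashes = Get-ChildItem .\ui-backup -Recurse -File | Get-FileHash -Algorithm SHA256
$psHashes = Get-ChildItem .\ps-backup -Recurse -File | Get-FileHash -Algorithm SHA256
Compare-Object ($uiHashes.Hash | Sort-Object) ($psHashes.Hash | Sort-Object)   # no output means identical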
@arjun960 can you confirm and create an issue for the API if need be?
@drose12 The issue is fixed; we now get the same number of blocks through PowerShell as well. The changes will be available in the next SDK release.
Closing the issue.
When I use the SolidFire GUI to back up the same volume ID to S3, the key difference I see is that the block numbers differ between the two methods.
Via the GUI:
{ "id": 54, "method": "StartBulkVolumeRead", "params": { "volumeID": 36, "format": "native", "script": "bv_internal.py", "scriptParameters": { "range": { "lba": 0, "blocks": 9765632 }, "write": { "awsAccessKeyID": "solidfire", "awsSecretAccessKey": "****", "bucket": "solidfire_backups", "prefix": "redacted-efsl/redacted-36", "endpoint": "s3", "format": "native", "hostname": "redacted" } } } }
Via PowerShell:
Start-SFVolumeBackup -VolumeID $volumeid `
  -Format native -BackupTo S3 `
  -Hostname redacted.com -AccessKeyID solidfire `
  -SecretAccessKey redacted -Verbose `
  -Bucket solidfire_backups
Verbose shows:
... some output deleted ...
REQUEST {"id":6,"method":"StartBulkVolumeRead","params":{"volumeID":36,"format":"native","script":"bv_internal.py","scriptParameters":{"range":{"blocks":244224,"lba":0},"write":{"format":"native","awsAccessKeyID":"solidfire","prefix":"redacted-36","endpoint":"s3","hostname":"redacted.com","bucket":"solidfire_backups","awsSecretAccessKey":"redacted"}}}}
The key difference I see here is that blocks in the GUI request is 9765632 but in the PowerShell request it is 244224.
Of course the second is much smaller (at 4 kB per block, 244224 blocks is roughly 1 GB versus roughly 40 GB for 9765632), and while the backup completes, it is clearly not all the data when I look at the content in S3.