[Open] dantman opened this issue 4 years ago
Sure, that makes sense. You are thinking in terms of the upload target, correct?
If you are familiar with it, would you open a PR?
I'm mildly familiar. I'll probably just use the gsutil cp command (following the docs) the same way the container uses the aws cli, and figure out how to install it in the container.
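Roughly what I have in mind, as a sketch only (the paths, bucket name, and variable names below are placeholders, not the container's actual internals):

```sh
# Sketch only: mirror the existing aws-cli upload step with gsutil.
# DUMPFILE and TARGET are illustrative placeholders.
DUMPFILE="/tmp/backups/db_backup.tgz"    # example dump file
TARGET="gs://my-backup-bucket/mysql"     # example GCS destination

# Roughly what the container does today for S3 targets:
#   aws s3 cp "${DUMPFILE}" "s3://my-backup-bucket/mysql"

# The GCS equivalent:
gsutil cp "${DUMPFILE}" "${TARGET}"
```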
How do I run a test with s3 that I can update to also do GCP?
Sounds as good as anything else. 😃
As for testing, I'm not sure. We use lphoward/fake-s3 to emulate an object store. Worth checking to see if it faithfully represents GCP's store as well. If not, definitely open to replacing it with something that does both.
The bigger question is how the backup will know, "I'm talking to AWS or compatible" vs "I'm talking to GCP or compatible". Right now it relies on the protocol part of the target URL, i.e. s3://. Is there an alternative commonly used for GCP?
Yup, gs://DESTINATION_BUCKET_NAME/.
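So the existing scheme check could presumably just grow a gs:// branch, something like this (sketch only; the variable and file names here are illustrative, not the script's actual internals):

```sh
# Sketch only: pick the upload tool based on the target URL's scheme.
# DB_DUMP_TARGET and DUMPFILE are used illustratively here.
case "${DB_DUMP_TARGET}" in
  s3://*)
    aws s3 cp "${DUMPFILE}" "${DB_DUMP_TARGET}"
    ;;
  gs://*)
    gsutil cp "${DUMPFILE}" "${DB_DUMP_TARGET}"
    ;;
  smb://*)
    : # existing SMB handling stays as-is
    ;;
esac
```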
A quick search suggests fsouza/fake-gcs-server exists, which might be worth a try.
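If it pans out, the test setup could run both emulators side by side, something like this (untested; the ports are the images' documented defaults, so adjust to whatever the Makefile actually expects):

```sh
# Untested sketch: run both object-store emulators for the integration tests.
docker run -d --name fake-s3  -p 4569:4569 lphoward/fake-s3
docker run -d --name fake-gcs -p 4443:4443 fsouza/fake-gcs-server -scheme http
```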
You might not want to use port 445 in the test. make test fails for me because of that.
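For example (purely illustrative; the image and container name below are made up), publishing the test's SMB container on a non-privileged host port avoids the clash:

```sh
# Hypothetical workaround: map the container's SMB port 445 to an unused
# host port instead of binding host port 445 directly.
docker run -d --name smb-test -p 1445:445 example/smb-test-image
```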
I would love GCS support. I am probably wrong, but if we can create an SMB volume backed by GCS (https://cloud.google.com/solutions/partners/netapp-cloud-volumes/creating-smb-volumes#creating_an_smb_volume), can't we use the SMB support already provided by databacker? "SMB: If the value of DB_DUMP_TARGET is a URL of the format smb://hostname/share/path/ then it will connect via SMB."
I would guess so.
Hoping someone opens a PR on this.
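In that setup nothing new should be needed in mysql-backup itself; the existing SMB target would just point at the GCS-backed share, roughly like this (hostname, share, path, and DB settings are placeholders; DB_DUMP_TARGET is quoted above, DB_SERVER is assumed from the same docs):

```sh
# Illustrative only: point the existing smb:// target at a GCS-backed share
# (e.g. a NetApp Cloud Volumes SMB volume). All values below are placeholders.
docker run -d \
  -e DB_SERVER=my-db-host \
  -e DB_DUMP_TARGET=smb://my-netapp-host/backups/mysql \
  databack/mysql-backup   # verify the exact image name/tag in the README
```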
Any thoughts on adding GCP cloud files support, via gsutil (like how aws-cli is used)? It's not possible to use mysql-backup in GCP environments because neither GCP's s3 compat layer nor s3proxy works correctly with the multipart uploads aws-cli does for large dump files.
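For reference, the S3-compat route looks roughly like this (the bucket name and HMAC credentials are placeholders); per the above, it breaks once aws-cli switches to multipart uploads for large dump files:

```sh
# Roughly the S3-compat route described above: aws-cli pointed at GCS's
# XML interoperability endpoint with HMAC credentials (placeholders below).
export AWS_ACCESS_KEY_ID="GOOG...placeholder"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"
aws s3 cp /tmp/backups/db_backup.tgz s3://my-backup-bucket/mysql/ \
  --endpoint-url https://storage.googleapis.com
```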