k8ssandra / k8ssandra-operator

The Kubernetes operator for K8ssandra
https://k8ssandra.io/
Apache License 2.0

Medusa Ceph S3 support (`s3_rgw`) #832

Closed: caniko closed this issue 1 year ago

caniko commented 1 year ago

What is missing? Medusa fails to find the container `k8ssandra-medusa` when using `s3_rgw`.

Environment

Anything else we need to know?: Medusa cannot find the `k8ssandra-medusa` bucket:

[2023-01-29 10:51:34,050] DEBUG: http://rook-ceph-rgw-nautiluss3.rook:80 "HEAD /k8ssandra-medusa HTTP/1.1" 404 0

Error:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/cassandra/medusa/service/grpc/server.py", line 349, in <module>
    server.serve()
  File "/home/cassandra/medusa/service/grpc/server.py", line 65, in serve
    medusa_pb2_grpc.add_MedusaServicer_to_server(MedusaService(config), self.grpc_server)
  File "/home/cassandra/medusa/service/grpc/server.py", line 104, in __init__
    self.storage = Storage(config=self.config.storage)
  File "/home/cassandra/medusa/storage/__init__.py", line 75, in __init__
    self.storage_driver = self._connect_storage()
  File "/home/cassandra/medusa/storage/__init__.py", line 91, in _connect_storage
    return S3RGWStorage(self._config)
  File "/home/cassandra/medusa/storage/abstract_storage.py", line 40, in __init__
    self.bucket = self.driver.get_container(container_name=config.bucket_name)
  File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 360, in get_container
    container_name=container_name)
libcloud.storage.types.ContainerDoesNotExistError: <ContainerDoesNotExistError in <libcloud.storage.drivers.rgw.S3RGWStorageDriver object at 0x7fe212a42940>, container=k8ssandra-medusa, value=None>
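For context, the failure comes from libcloud's `get_container`: the driver issues the `HEAD /k8ssandra-medusa` request shown in the DEBUG line, and a 404 response surfaces as `ContainerDoesNotExistError`. A minimal sketch of the same lookup outside the Medusa pod, assuming the endpoint and credentials from the manifest below (placeholders, not real keys):

```python
# Sketch only: reproduce the failing lookup with the same libcloud driver that
# Medusa's s3_rgw backend uses. Host, port and credentials are assumptions
# taken from the manifest below, not values confirmed by the logs.
from libcloud.storage.drivers.rgw import S3RGWStorageDriver
from libcloud.storage.types import ContainerDoesNotExistError

driver = S3RGWStorageDriver(
    key="<rgw-access-key>",      # assumed to come from the ceph-s3-key secret
    secret="<rgw-secret-key>",
    host="rook-ceph-rgw-nautiluss3.rook",
    port=80,
    secure=False,                # matches `secure = false` in the manifest
)

try:
    # Sends the HEAD /k8ssandra-medusa request seen in the DEBUG line;
    # a 404 response is raised as ContainerDoesNotExistError.
    driver.get_container(container_name="k8ssandra-medusa")
    print("bucket exists")
except ContainerDoesNotExistError:
    print("bucket does not exist on the RGW endpoint")
```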

The `medusa` component of my K8ssandra cluster manifest, in Terraform:

medusa = {
  storageProperties = {
    # Can be either of local, google_storage, azure_blobs, s3, s3_compatible, s3_rgw or ibm_storage
    storageProvider = "s3_compatible"
    storageSecretRef = {
      # Name of the secret containing the credentials file to access the backup storage backend
      name = "ceph-s3-key"
    }
    # Name of the storage bucket
    bucketName = "k8ssandra-medusa"
    # Prefix for this cluster in the storage bucket directory structure, used for multitenancy
    prefix = "nautilus"
    # Host to connect to the storage backend (Omitted for GCS, S3, Azure and local).
    host = "rook-ceph-rgw-nautiluss3.rook"
    # Whether or not to use SSL to connect to the storage backend
    secure = false
    # Maximum backup age that the purge process should observe.
    # 0 equals unlimited
    maxBackupAge = 0

    # Maximum number of backups to keep (used by the purge process).
    # 0 equals unlimited
    maxBackupCount = 0

    # AWS Profile to use for authentication.
    # apiProfile =
    transferMaxBandwidth = "50MB/s"

    # Number of concurrent uploads.
    # Helps maximizing the speed of uploads but puts more pressure on the network.
    # Defaults to 1.
    concurrentTransfers = 1

    # File size in bytes over which cloud specific cli tools are used for transfer.
    # Defaults to 100 MB.
    multiPartUploadThreshold = 104857600

    # Age after which orphan sstables can be deleted from the storage backend.
    # Protects from race conditions between purge and ongoing backups.
    # Defaults to 10 days.
    backupGracePeriodInDays = 10
  }
}
caniko commented 1 year ago

I made a mistake and forgot to make the bucket :man_facepalming:
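For anyone hitting the same error: in this setup Medusa expects the bucket to already exist on the RGW endpoint before the first backup. A hedged sketch of pre-creating it with boto3 (the endpoint, bucket name, and credentials are assumptions taken from the manifest above; any other S3 client, or a Rook ObjectBucketClaim, works just as well):

```python
# Sketch only: pre-create the bucket Medusa expects, using boto3 against the
# Ceph RGW endpoint from the manifest. Credentials here are placeholders,
# assumed to match the ceph-s3-key secret.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rook-ceph-rgw-nautiluss3.rook:80",
    aws_access_key_id="<rgw-access-key>",
    aws_secret_access_key="<rgw-secret-key>",
)
s3.create_bucket(Bucket="k8ssandra-medusa")
```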