wildfly / wildfly-operator

Kubernetes Operator for WildFly
http://docs.wildfly.org/wildfly-operator/
Apache License 2.0

Add the ability to mount multiple persistent volumes by using the EAP operator #190

Open yersan opened 3 years ago

yersan commented 3 years ago

Overview

At the moment there is no way to mount arbitrary persistent volumes into the server pods created by the Operator.

The Operator allows the configuration of a persistent volume for the ${jboss.server.data.dir} by using the storage attribute on the Custom Resource Definition (CRD). The Persistent Volume Claim (PVC) that requests a volume binding for the server data directory is created automatically by the Operator. This volume is never shared with other pod replicas.
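For reference, a minimal sketch of today's data-dir configuration, assuming the storage/volumeClaimTemplate shape documented in the operator's user guide (the exact field layout may differ by version):

spec:
  applicationImage: "....."
  replicas: 1
  storage:
    # Backs ${jboss.server.data.dir}; one PVC per pod, never shared.
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 3Gi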

Users may have other requirements; for example, they may need to mount additional volumes on specific paths outside of the server directories and, optionally, share such a volume across all replicas of the server pod by using an existing Persistent Volume Claim available in the pod namespace.

The goal of this feature is to expose in the Operator CRD the standard Kubernetes PersistentVolumeClaim and VolumeMount elements so that users can add PersistentVolumeClaims per pod and mount them into the server pod. We will leave the current storage configuration to handle only the ${jboss.server.data.dir} persistent volume.

Optionally, we could also allow adding any volume type supported by the cloud provider by configuring a Volume and making it available as a shared volume. A hypothetical illustration follows this paragraph.
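As a rough illustration of that optional extension, a hypothetical CR fragment where the volume source is an arbitrary Kubernetes volume type (NFS here). The volume source fields follow the core Kubernetes Volume API, but their placement in the CRD, the server name, and the paths are assumptions for illustration only:

spec:
  volumes:
  - name: config-share
    # Any core volume source could appear here instead of persistentVolumeClaim.
    nfs:
      server: nfs.example.com   # hypothetical NFS server
      path: /exports/wildfly
    mountPath: /etc/wildfly-shared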

Issue Metadata

https://issues.redhat.com/browse/EAP7-1675

Related Issues

Dev Contacts

jacopotessera (Community user)

QE Contacts

TBD

Testing By

TBD

Affected Projects or Components

WildFly Operator

Other Interested Projects

N/A

Requirements

Hard Requirements

Configuration example

spec:
  applicationImage: "....."
  replicas: 2
  volumeClaimTemplates:
  - name: log-storage
    accessModes: [ "ReadWriteOnce" ]
    storage: 1Gi
    mountPath: /var/logs

The preceding configuration will create the following PersistentVolumeClaims:

  - log-storage-0 (always bound to the first replica)
  - log-storage-1 (always bound to the second replica)

The volume of each claim will be mounted at /var/logs. The storage is not shared across pod replicas.
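To ground this requirement, a sketch of the StatefulSet fragment the Operator could generate from the CR above, assuming it maps the CRD fields onto the standard Kubernetes volumeClaimTemplates mechanism (the container name and the exact rendering are assumptions):

apiVersion: apps/v1
kind: StatefulSet
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: wildfly
        volumeMounts:
        - name: log-storage        # matches the claim template name
          mountPath: /var/logs
  volumeClaimTemplates:
  - metadata:
      name: log-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Note that the stock StatefulSet controller names the generated claims <template-name>-<pod-name> (for example log-storage-<cr-name>-0), so the per-replica names listed above are a simplification.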

Nice-to-Have Requirements

Configuration example

spec:
  applicationImage: "....."
  replicas: 2
  volumes:
  - name: shared-storage
    persistentVolumeClaim:
      claimName: shared-storage-pvc
    mountPath: /usr/share

The preceding configuration will not create any PVC; it assumes a PVC named shared-storage-pvc already exists in the namespace where the CR is being created. The volume of the claim will be mounted at /usr/share. The storage is shared across all pod replicas.
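Since the Operator would not create this claim, a sketch of the PVC a user might pre-create in the namespace. The names come from the example above; ReadWriteMany is an assumption, chosen because the volume must be mountable by all replicas, potentially scheduled on different nodes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-storage-pvc
spec:
  # RWX so every replica can mount the same volume simultaneously.
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 1Gi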

Non-Requirements

N/A

Test Plan

Community Documentation

The user guide and the WildFlyServer CRD documentation will be updated to reflect the changes introduced by this RFE.

Release Note Content

Added the ability to mount additional volumes to the server pod

jmesnil commented 3 years ago

As a part of that RFE, should we also allow configuring volumeClaimTemplates for the statefulset?

jmesnil commented 3 years ago

Ideally, as a part of this RFE, I would like to deprecate the StorageSpec, but we would need a way to properly configure the HOME directory in the bootable jar case to be able to mount the volume at the right location (corresponding to server.data.dir).

yersan commented 3 years ago

Ideally, as a part of this RFE, I would like to deprecate the StorageSpec, but we would need a way to properly configure the HOME directory in the bootable jar case to be able to mount the volume at the right location (corresponding to server.data.dir).

One thing we have to pay attention to, regarding the ability to configure the server.data.dir, is that this directory must not be shared across server replicas. Each pod should get its own directory. For this reason, I initially saw the StorageSpec as a good mechanism to keep under control and dedicate exclusively to the server data storage. Its name is not very descriptive for this unique functionality, though.

If we deprecate it in favor of user-controlled volumes/volume mounts, we should avoid letting users choose an arbitrary existing PVC for it and keep the configuration under control to avoid unwanted situations.

yersan commented 3 years ago

As a part of that RFE, should we also allow configuring volumeClaimTemplates for the statefulset?

@jmesnil It would be useful to users as well. We could then cover two use cases here:

  1. Be able to share the same PVC across all statefulset instances. A nice-to-have here would be a VolumeClaim configuration available in the CRD as well, so users who want to cover this use case could define the PVC configuration directly in the CRD without needing to create the PVC manually on the cluster.
  2. Be able to create per-pod PVCs via volumeClaimTemplates. With volumeClaimTemplates, each pod's storage is independent and not shared across server instances.

We can cover both on the same RFE.

jmesnil commented 3 years ago

For this RFE, we should focus on #2 to have a separate persistent volume for each pod.

Shared storage (#1) might be useful in general, but it is better addressed with something on top of the raw volumes (a DB, a shared cache).

yersan commented 3 years ago

@jmesnil I added the shared storage as a nice to have.