anilarora opened this issue 5 years ago
@anilarora Can you provide the use case?
I'm closing this for now given how old the request is, and given that this is the first time I've run across such a request. But please do feel free to file a new issue if you can provide a use case.
For future questions of this nature please feel free to use WKO's public #operator slack channel at oracle-weblogic.slack.com (sign-up here: https://weblogic-slack-inviter.herokuapp.com/). The slack channel is closely monitored, and tends to get a quicker response in comparison to filing an Issue.
@rjeberhard FYI
@tbarnes-us The use case we had in mind is to have a persistent volume created per managed server to hold data for that pod. In our case, we want to create a block volume to hold large amounts of temporary runtime data, for example large data uploads. Another use case would be to create a block volume to host the domain home for the managed server, which also includes the log files, which can get quite large. In most cases, the boot volume does not have sufficient space for this ephemeral data.
For environments with more than one managed server, this means that we have to pre-create each volume with a specific naming pattern and rely on that name match to tie each volume to its server. The proposed feature would allow us to use a template for the claim instead of having to pre-create it.
Re-opening and assigning to Monica & Ryan for triage.
This is also getting discussed on internal slack.
Ryan & Monica: You may want to hold off on triaging this for a bit -- Anil kindly plans to add a couple of examples that compare the template approach vs. using work-arounds.
So, let's take the example where we want to create a persistent volume that holds "temp" data, since the boot volume may not have sufficient space for it. In our use case, this temp data needs to be unique to each server pod, and in a multi-tenant environment we would not want cross-contamination between different domains, as that could be a security issue (making a host volume not an option).
In our ideal case, we would have the following definition for the weblogic domain resource (some items omitted for brevity):
```yaml
apiVersion: "weblogic.oracle/v8"
kind: Domain
metadata:
  name: essbase1
  labels:
    weblogic.domainUID: essbase1
spec:
  domainUID: essbase1
  domainHome: "/u01/config/domains/essbase_domain"
  domainHomeSourceType: PersistentVolume
  image: oracle/essbase:1.0
  clusters:
    - clusterName: essbase_cluster
      replicas: 2
      serverStartState: "RUNNING"
  serverPod:
    initContainers:
      - name: server-init
        image: oracle/essbase:1.0
        command: [ "/bin/sh" ]
        args: [ "-c", "rm -rf /u01/tmp/*" ]
        volumeMounts:
          - name: temp-volume
            mountPath: /u01/tmp
    volumeClaimTemplates:
      - metadata:
          name: temp-volume
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "oci-bv"
          resources:
            requests:
              storage: 50Gi
    volumeMounts:
      - mountPath: /u01/tmp
        name: temp-volume
```
So, as the number of replicas changes, the persistent volume claims can grow accordingly. In this particular use case, because the temp data can be deleted when the server pod is deleted, we could potentially use the CSI Ephemeral Inline Volume construct so that the lifecycle of the volume is tied to the lifecycle of the server pod, which would also avoid the need for an init container to clean up the tmp directory.
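To illustrate that lifecycle-tied idea, the related Kubernetes "generic ephemeral volume" construct could be sketched like this in a plain pod spec (this is an assumption-laden sketch, not something the Domain `serverPod` accepts today; the storage class is carried over from the example above):

```yaml
# Sketch only: a generic ephemeral volume, where the PVC is created with the
# pod and deleted when the pod is deleted -- no init-container cleanup needed.
volumes:
  - name: temp-volume
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "oci-bv"
          resources:
            requests:
              storage: 50Gi
```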
The alternative approach would require us to create each persistent volume claim ahead of time, with names matching a specific pattern:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: essbase1-temp-volume-essbase-server1
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: essbase1-temp-volume-essbase-server2
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: "weblogic.oracle/v8"
kind: Domain
metadata:
  name: essbase1
  labels:
    weblogic.domainUID: essbase1
spec:
  domainUID: essbase1
  domainHome: "/u01/config/domains/essbase_domain"
  domainHomeSourceType: PersistentVolume
  image: oracle/essbase:1.0
  clusters:
    - clusterName: essbase_cluster
      replicas: 2
      serverStartState: "RUNNING"
  serverPod:
    initContainers:
      - name: server-init
        image: oracle/essbase:1.0
        command: [ "/bin/sh" ]
        args: [ "-c", "rm -rf /u01/tmp/*" ]
        volumeMounts:
          - name: temp-volume
            mountPath: /u01/tmp
    volumes:
      - name: temp-volume
        persistentVolumeClaim:
          claimName: $(DOMAIN_UID)-temp-volume-$(SERVER_NAME)
    volumeMounts:
      - mountPath: /u01/tmp
        name: temp-volume
```
This is harder to tie to the pod lifecycle and adds more work to support features such as dynamic clusters.
I've filed Oracle internal JIRA OWLS-89088 to track this idea.
I would like to be able to define a claim template in the serverPod, similar to what is offered by the StatefulSet resource type. While not critical, this could simplify the domain resource definition a bit.
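For reference, this is the StatefulSet shape the request mirrors: the controller stamps out one PVC per replica from `volumeClaimTemplates`, so no claims need to be pre-created (the names below are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example            # illustrative name
spec:
  serviceName: example
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: server
          image: oracle/essbase:1.0
          volumeMounts:
            - name: temp-volume
              mountPath: /u01/tmp
  # One PVC is created per replica, named <template-name>-<statefulset-name>-<ordinal>.
  volumeClaimTemplates:
    - metadata:
        name: temp-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "oci-bv"
        resources:
          requests:
            storage: 50Gi
```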