crossplane / oam-kubernetes-runtime

A set of libraries for building OAM runtimes

ContainerizedWorkload Add SubWorkloadType #13

Open allenhaozi opened 4 years ago

allenhaozi commented 4 years ago

Current struct:

// A ContainerizedWorkloadSpec defines the desired state of a
// ContainerizedWorkload.
type ContainerizedWorkloadSpec struct {
    // OperatingSystem required by this workload.
    // +kubebuilder:validation:Enum=linux;windows
    // +optional
    OperatingSystem *OperatingSystem `json:"osType,omitempty"`

    // CPUArchitecture required by this workload.
    // +kubebuilder:validation:Enum=i386;amd64;arm;arm64
    // +optional
    CPUArchitecture *CPUArchitecture `json:"arch,omitempty"`

    // Containers of which this workload consists.
    Containers []Container `json:"containers"`
}

Add a new field, WorkloadSubType:

// A ContainerizedWorkloadSpec defines the desired state of a
// ContainerizedWorkload.
type ContainerizedWorkloadSpec struct {
    // OperatingSystem required by this workload.
    // +kubebuilder:validation:Enum=linux;windows
    // +optional
    OperatingSystem *OperatingSystem `json:"osType,omitempty"`

    // CPUArchitecture required by this workload.
    // +kubebuilder:validation:Enum=i386;amd64;arm;arm64
    // +optional
    CPUArchitecture *CPUArchitecture `json:"arch,omitempty"`

    // WorkloadSubType required by this workload.
    // +kubebuilder:validation:Enum=server;task;crontask;statefulset;daemonset
    // +optional
    WorkloadSubType string `json:"subType,omitempty"`

    // Containers of which this workload consists.
    Containers []Container `json:"containers"`
}

background
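
For illustration, a Component using the proposed field might look like the following. This is a hypothetical sketch, since subType is not implemented; the value crontask is taken from the proposed enum above:

apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: nightly-report
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      # Hypothetical field from the proposal; would ask the runtime to
      # render a CronJob instead of the default Deployment.
      subType: crontask
      containers:
        - name: report
          image: busybox
          command: ["sh", "-c", "date"]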

wonderflow commented 4 years ago

I think what @allenhaozi needs is a way to distinguish between the different K8s native workloads that a containerized workload can map to.

Currently, a ContainerizedWorkload is translated to a K8s Deployment by default, but in fact K8s Job, StatefulSet, and some other native workloads are all containerized too. We don't have a mechanism that lets the user choose.
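
To make the gap concrete, the proposal implies a mapping along these lines. This is purely a hypothetical sketch: only the Deployment path exists today, and the other target kinds are assumptions drawn from the proposed enum:

# Hypothetical subType -> native workload mapping (not implemented):
subTypeToKind:
  server: apps/v1 Deployment          # current default behavior
  task: batch/v1 Job
  crontask: batch/v1beta1 CronJob
  statefulset: apps/v1 StatefulSet
  daemonset: apps/v1 DaemonSet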

wonderflow commented 4 years ago

I suggest using a new workload type to fit each of the other K8s resources. For example, use a Job workload to represent a K8s Job:

apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: ai-job
  annotations:
    version: v1.0.0
    description: "A Job"
spec:
  workload:
    apiVersion: batch/v1
    kind: Job
    spec:
      template:
        spec:
          containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]

wonderflow commented 4 years ago

What @allenhaozi suggests in this issue is also a way to let the core workload (ContainerizedWorkload) cover more scenarios. This was also one of the advantages of OAM v1alpha1, where we defined six core workload types (Server/Task/Worker and their singleton variants).

In v1alpha2 we enable more extensibility, but we can't ignore the standard parts.

Another option is to add batch/v1 Job and some other native workloads to the set of OAM core workloads.

allenhaozi commented 4 years ago

> What @allenhaozi suggests in this issue is also a way to let the core workload (ContainerizedWorkload) cover more scenarios. This was also one of the advantages of OAM v1alpha1, where we defined six core workload types (Server/Task/Worker and their singleton variants).
>
> In v1alpha2 we enable more extensibility, but we can't ignore the standard parts.
>
> Another option is to add batch/v1 Job and some other native workloads to the set of OAM core workloads.

Yes, this is what I want to express. I'm looking forward to a powerful OAM core workload.

ryanzhang-oss commented 4 years ago

@allenhaozi Thank you for raising the issue.

Here is our current thinking

  1. We should add more types of core/standard workloads. The general rule of thumb is that if two workloads require very different specs (e.g. Job vs. Deployment), then they are probably two different types of workloads.
  2. To address the lack of Server/Worker/Singleton variants in the v1alpha2 spec compared to v1alpha1, we propose adding a "policy" to a component to express its runtime characteristics/requirements (a hypothetical sketch follows this list). Please see the issue in the OAM spec for a detailed explanation.
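
The policy schema was still under design at this point, so the following is only a hypothetical sketch of the shape being discussed; the policy field and its values are assumptions, not the final design:

apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: web-server
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: web
          image: nginx:1.19
  # Hypothetical: a policy list expressing runtime requirements,
  # e.g. "this component is a long-running, internet-facing server".
  policy:
    - type: server
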
wonderflow commented 4 years ago

@allenhaozi Can this conclusion solve your issue? By the way, which kind of policy approach do you prefer?

allenhaozi commented 4 years ago

> @allenhaozi Can this conclusion solve your issue? By the way, which kind of policy approach do you prefer?

Yes, policy can cover my case. Internally, I implemented two workloads, ServerWorkload and TaskWorkload:

// ServerWorkloadSpec defines the desired state of ServerWorkload
type ServerWorkloadSpec struct {
    // OperatingSystem required by this workload.
    // +kubebuilder:validation:Enum=linux;windows
    // +optional
    OperatingSystem *oamcorev1alpha2.OperatingSystem `json:"osType,omitempty"`

    // CPUArchitecture required by this workload.
    // +kubebuilder:validation:Enum=i386;amd64;arm;arm64
    // +optional
    CPUArchitecture *oamcorev1alpha2.CPUArchitecture `json:"arch,omitempty"`

    // WorkloadSubType required by this workload.
    // +kubebuilder:validation:Enum=server;statefulset;daemonset
    // +optional
    WorkloadSubType register.WorkloadSubType `json:"subType,omitempty"`

    // ReplicaCount required by this workload.
    // +optional
    ReplicaCount int32 `json:"replicaCount"`

    // Containers of which this workload consists.
    Containers []oamcorev1alpha2.Container `json:"containers"`
}

// TaskWorkloadSpec defines the desired state of TaskWorkload
type TaskWorkloadSpec struct {
    // OperatingSystem required by this workload.
    // +kubebuilder:validation:Enum=linux;windows
    // +optional
    OperatingSystem *oamcorev1alpha2.OperatingSystem `json:"osType,omitempty"`

    // CPUArchitecture required by this workload.
    // +kubebuilder:validation:Enum=i386;amd64;arm;arm64
    // +optional
    CPUArchitecture *oamcorev1alpha2.CPUArchitecture `json:"arch,omitempty"`

    // Specifies the maximum desired number of pods the job should
    // run at any given time. The actual number of pods running in steady state will
    // be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism),
    // i.e. when the work left to do is less than max parallelism.
    // More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
    // +optional
    Parallelism *int32 `json:"parallelism,omitempty"`

    // Specifies the desired number of successfully finished pods the
    // job should be run with.  Setting to nil means that the success of any
    // pod signals the success of all pods, and allows parallelism to have any positive
    // value.  Setting to 1 means that parallelism is limited to 1 and the success of that
    // pod signals the success of the job.
    // More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
    // +optional
    Completions *int32 `json:"completions,omitempty"`

    // The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
    // +kubebuilder:validation:MinLength=0
    // +optional
    Schedule string `json:"schedule,omitempty"`

    // Containers of which this workload consists.
    Containers []oamcorev1alpha2.Container `json:"containers"`
}

ServerWorkload is for online services and TaskWorkload is for offline jobs. I am validating these two workloads.

ryanzhang-oss commented 4 years ago

Here is the policy design https://github.com/crossplane/oam-kubernetes-runtime/pull/33

ryanzhang-oss commented 4 years ago

> Yes, policy can cover my case. Internally, I implemented two workloads, ServerWorkload and TaskWorkload.

I think we don't really need ServerWorkload; the policy design (#33) has an example of how to create a workload that is internet accessible. We will also introduce a ServiceTrait, which will create a Service that applies to a given workload.
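
For reference, traits attach to a workload through the ApplicationConfiguration. The sketch below uses ManualScalerTrait, which already exists in this runtime, since the ServiceTrait schema was not yet defined:

apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: example-appconfig
spec:
  components:
    - componentName: web-server
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: ManualScalerTrait
            metadata:
              name: example-scaler
            spec:
              replicaCount: 3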

As for TaskWorkload, it seems to be mostly a copy of the K8s Job object. We actually do support users bringing native K8s resources directly into OAM via a WorkloadDefinition. I wonder if you see any advantage of using TaskWorkload instead of a native K8s Job?
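
For example, a WorkloadDefinition along these lines registers the native batch/v1 Job so that a Component can embed it directly, as in the earlier ai-job example (a sketch; the name follows the usual group-qualified plural convention):

apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: jobs.batch
spec:
  definitionRef:
    name: jobs.batch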

resouer commented 4 years ago

@allenhaozi Hi, it seems the community is really interested in this feature, so we created a new issue to track it: #211

Feel free to comment there! Could you please close this one as a duplicate?