kubernetes-sigs / application

Application metadata descriptor CRD
Apache License 2.0
513 stars 169 forks

[Proposition] Extend the componentGroupKind to define a Component's kind #68

Closed cmoulliard closed 4 years ago

cmoulliard commented 6 years ago

The ApplicationSpec type includes the field spec.componentKinds, which groups under a name the related kubernetes resources (Service, StatefulSet, ConfigMap, Secret, ...) that the application is composed of, describing it globally.

// ApplicationSpec defines the specification for an Application.
type ApplicationSpec struct {
    // ComponentGroupKinds is a list of Kinds for Application's components (e.g. Deployments, Pods, Services, CRDs). It
    // can be used in conjunction with the Application's Selector to list or watch the Application's components.
    ComponentGroupKinds []metav1.GroupKind `json:"componentKinds,omitempty"`
    // ...
}
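
For context, a minimal Application manifest using this field could look as follows. This is a sketch assuming the app.k8s.io/v1beta1 API group served by this repository; the name, labels and chosen kinds are illustrative:

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: payment
spec:
  # Selects the resources belonging to this application
  selector:
    matchLabels:
      app.kubernetes.io/name: payment
  # GroupKinds of the components; the core API group is the empty string
  componentKinds:
  - group: apps
    kind: Deployment
  - group: ""
    kind: Service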

If we use this Application custom resource to install/configure the environment on kubernetes, deploying the needed resources using a controller or operator, then it is also important to have a specialised type able to describe each component in business terms.

Example: As a user, I would like to install a Spring Boot application using version 1.5.15 of the framework, and I would like to access it externally using a route. The default port of the service is 8080. Converted into a component type, this requirement could be expressed as the following object:

apiVersion: component.k8s.io/v1alpha1
kind: Component
metadata:
  name: my-spring-boot
spec:
  deployment: innerloop
  packaging: jar
  type: application
  runtime: spring-boot
  version: 1.5.15
  exposeService: true

The advantage of having such a Component custom resource is that a UI or CLI could then display the information in a more readable way:

kubectl application describe

NAME                Category      Type               Version       Source       Visible Externally
payment-frontend    runtime       nodejs             0.8           local        yes
payment-backend     runtime       spring-boot        1.5.15        binary       no
payment-database    service       db-postgresql-apb  dev                        no

Component type proposition

type ComponentSpec struct {
    // Name is a human-readable string describing, from a business perspective, what this component relates to
    // Example: payment-frontend, retail-backend
    Name string
    // PackagingMode refers to the type of archive used to package the code
    // Example: jar, war, ...
    PackagingMode string
    // Type describes how the component is installed: as a pod, job, statefulset, ...
    Type string
    // DeploymentMode indicates the strategy adopted to install the resources into a namespace
    // and next to create a pod. Two strategies are currently supported: inner and outer loop,
    // where the outer loop refers to building the code and packaging the application into a container image,
    // while the inner loop installs a pod running a supervisord daemon used to trigger actions such as: assemble, run, ...
    DeploymentMode string `json:"deployment,omitempty"`
    // Runtime is the framework used to start the application within the container
    // It corresponds to one of the following values: spring-boot, vertx, thorntail, nodejs
    Runtime string `json:"runtime,omitempty"`
    // ExposeService indicates whether we want to expose the service outside of the cluster as a route
    ExposeService bool `json:"exposeService,omitempty"`
    // Cpu is the CPU to be assigned to the pod running the application
    Cpu string `json:"cpu,omitempty"`
    // Memory is the memory to be assigned to the pod running the application
    Memory string `json:"memory,omitempty"`
    // Port is the HTTP/TCP port number used within the pod by the runtime
    Port int32 `json:"port,omitempty"`
    // Storage specifies the capacity and access mode (ReadWrite) of the volume to be mounted for the pod
    Storage Storage `json:"storage,omitempty"`
    // Images is the list of images created, according to the DeploymentMode, to install the loop
    Images []Image `json:"image,omitempty"`
    // Envs is an array of env variables containing extra/additional info used to configure the runtime
    Envs []Env `json:"env,omitempty"`
    // Services is the list of services consumed by the runtime and created as service instances from a Service Catalog
    Services []Service
    // Features represent capabilities required by the component to operate, to be installed alongside it:
    // for example a Prometheus backend to collect metrics, an OpenTracing datastore
    // to centralize the traces/logs of the runtime, a service mesh, ...
    Features []Feature
}
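
Putting most of these fields together, a fuller Component manifest could look as follows. This is a sketch: the shapes of the storage and env entries are assumptions, since the Storage, Image, Env, Service and Feature types are not defined in this proposal.

apiVersion: component.k8s.io/v1alpha1
kind: Component
metadata:
  name: my-spring-boot
spec:
  deployment: innerloop
  packaging: jar
  type: application
  runtime: spring-boot
  version: 1.5.15
  exposeService: true
  port: 8080
  cpu: "500m"
  memory: "512Mi"
  storage:              # assumed shape of the Storage type
    capacity: 1Gi
    mode: ReadWriteOnce
  env:                  # assumed shape of the Env type
  - name: SPRING_PROFILES_ACTIVE
    value: my-cool-db
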
ant31 commented 6 years ago

> we use this Application custom resource to install/configure

Installation configuration is part of the container build. At runtime, the initialization and execution of the container are already declared in the workload object (Deployment, DaemonSet, Job, ...). The scope of the Application is to aggregate the kubernetes resources associated with one application; what goes on inside the container, and how it is built, is not part of the proposal.

cmoulliard commented 6 years ago

> The scope of the Application is to aggregate kubernetes

This is also the goal of my proposition, except that I suggest having a dedicated custom resource which describes, in a more human-readable way, the components composing my application.

When you design/develop a microservices application as an architect, you describe the different systems that are part of your application and that ultimately have to be installed/deployed on kubernetes/openshift.

By adopting this Component custom resource, we can decouple the technical k8s resources (pod, service, serviceaccount, configmap, secret, replicaset, ...) to be created from the definition of the application itself, and delegate to a controller/operator the responsibility of translating the provided info (runtime, cpu, memory, port, ...) into the final k8s resources to be created. A sketch of one generated resource follows the example below.

Example :

High Level definition

Application
  component1 : spring boot, port 9090, env : SPRING_PROFILES_ACTIVE=my-cool-db
  component2 : nodejs, port 8080
  service1 : postgresqldb (from service catalog)

Converted by the controller/operator into

Application AND associated resources (for garbage collection):
  component1 
    deployment, replicaset, pod where port = 9090, service where port = HTTP and 9090, pod including env var with value SPRING_PROFILES_ACTIVE=my-cool-db ...
  component2 
    deployment, replicaset, pod where port = 8080, service where port = HTTP and 8080, ...
  service1 
    serviceInstance, secret, and an update of the deployment to add EnvFrom
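
As an illustration of that translation, the Deployment generated for component1 might look as follows. This is a sketch: names, labels and the image reference are assumptions, the image being produced by the outer-loop build.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: component1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: component1
  template:
    metadata:
      labels:
        app: component1
    spec:
      containers:
      - name: component1
        # Image reference is an assumption; produced by the outer-loop build
        image: component1:latest
        ports:
        - containerPort: 9090
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: my-cool-db
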
cmoulliard commented 6 years ago

@mattfarina @kow3ns @prydonius WDYT about my proposition?

cmoulliard commented 6 years ago

To support the Component CRD approach, I have created this demo project, where we install 2 microservices, or components, and consume a service:

https://github.com/snowdrop/component-operator-demo#introduction

WDYT @mattfarina @kow3ns @prydonius

mattfarina commented 5 years ago

I like to keep a separation of concerns.

Information about how an image was built would be better placed on the image itself, maybe as an annotation. No matter where that image is run, this information would be available.

If something should be exposed, this will show up in a Service. The information needed will be in an existing object. Why would we add a component to record it a second time?
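
For example, the port and the (non-)exposure of payment-backend from the table above are already recorded in its Service object (a sketch; names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: payment-backend
spec:
  # ClusterIP means the service is not visible externally
  type: ClusterIP
  selector:
    app: payment-backend
  ports:
  - name: http
    port: 8080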

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/application/issues/68#issuecomment-515243482):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
cmoulliard commented 4 years ago

/reopen

k8s-ci-robot commented 4 years ago

@cmoulliard: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/application/issues/68#issuecomment-588974657):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
cmoulliard commented 4 years ago

/remove-lifecycle rotten

cmoulliard commented 4 years ago

FYI: The Component API Spec has been moved to this project: https://github.com/halkyonio/api/blob/master/component/v1beta1/types.go#L50-L74 and is currently supported by this operator: https://github.com/halkyonio/operator

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/application/issues/68#issuecomment-660644152):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.