cloud-ark / kubeplus

Kubernetes Operator for multi-instance multi-tenancy
https://cloudark.io/
Apache License 2.0

Publish multiarch Images #1100

Open enyachoke opened 1 year ago

enyachoke commented 1 year ago

I want to try out KubePlus in my homelab as groundwork for a future project. Currently, my lab only has arm64 devices, which means I can't use the official images. This leaves me with 2 options

devdattakulkarni commented 1 year ago

For each KubePlus container, we provide a build file.

Each build file has two options: build the 'latest' image, or build a 'versioned' image. For a versioned image, the build script reads the last entry from 'versions.txt' in the corresponding directory. Modify the build file to refer to your container registry so that you can push the images.

Once you build your images, make sure you update this file with the appropriate image tags.
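For the multi-arch case specifically, here is a hedged sketch of what one build-and-push step could look like with docker buildx. The registry, component directory, and versions.txt handling are illustrative assumptions, not the repo's actual build scripts:

```shell
# Hedged sketch: building one KubePlus container image for multiple
# architectures with docker buildx. Registry and component names are
# placeholders, not the repo's actual layout.
REGISTRY=registry.example.com/kubeplus   # your container registry
COMPONENT=helmer                         # one of the KubePlus containers

# Simulate the per-directory versions.txt the build scripts read;
# in the repo this file already exists next to each build file.
mkdir -p "$COMPONENT"
printf '0.1.0\n0.2.0\n' > "$COMPONENT/versions.txt"

# The build scripts take the last entry of versions.txt as the tag:
VERSION=$(tail -n 1 "$COMPONENT/versions.txt")

# Shown as a dry run (echo); drop the echo to actually build and push.
echo docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t "$REGISTRY/$COMPONENT:$VERSION" \
  --push "$COMPONENT/"
```

Note that cross-building arm64 images on an amd64 host requires QEMU emulation to be registered with the kernel (Docker Desktop ships this; on plain Linux, `docker run --privileged --rm tonistiigi/binfmt --install arm64` is one common way).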

You can then deploy KubePlus as follows (from the kubeplus folder):

```shell
helm install kubeplus ./deploy/kubeplus-chart --kubeconfig=kubeplus-saas-provider.json
```

For troubleshooting, you can check the logs of the individual containers, like so:

```shell
kubectl logs <kubeplus-pod> -c crd-hook
kubectl logs <kubeplus-pod> -c helmer
kubectl exec -it <kubeplus-pod> -c kubeconfiggenerator -- /bin/bash
# then, inside the container:
tail -100 /root/kubeconfiggenerator.log
```

enyachoke commented 1 year ago

Thanks for the guidance. I have now built the images and will try to deploy again tonight. See https://github.com/cloud-ark/kubeplus/pull/1103 for what I am shooting for: it makes sure the binaries used in the images are also built in a Docker build stage. I have seen the build scripts, but I am curious about how the images are built and published. Is there a CI process, or are they built and published manually?

devdattakulkarni commented 1 year ago

I quickly scanned through that PR and have left one comment. I will go through it more closely over the weekend.

Actually, we don't have a CI process yet. Currently, images are built manually. It should be straightforward to convert the manual process into build automation.

Btw, I wanted to mention that we have a Vagrant environment in which you can try KubePlus without having to build any images. A Vagrantfile is provided in the repo root. The detailed steps are available here

enyachoke commented 1 year ago

I was actually able to test it on a cloud Kubernetes cluster a few months ago. But currently I have a home lab cluster, complete with DNS, storage, and TLS. Also, my laptop is arm64, so I am not sure the Vagrant setup will work.

> Actually we don't have a CI process yet. Currently, images are built manually. It should be straightforward to convert the manual process into build automation.

I will be happy to push ahead with the PR to put in place a CI process, if that is okay.

devdattakulkarni commented 1 year ago

> I have actually been able to test it on a cloud K8s a few months ago.

Got it.

> But currently, I have a home lab cluster complete with DNS, storage and TLS. Also, my laptop is arm64 so not sure if the vagrant setup will work.

> I will be happy to push ahead with the PR to put in place a CI process if it is okay.

That will be fantastic!!

We had some early work done with Travis CI, primarily to run tests, but it is not being used currently.
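As a starting point for that automation, a common pattern is a GitHub Actions workflow that uses Docker's buildx actions to build and push multi-arch images. This is only a sketch of the general approach, not KubePlus's actual setup; the workflow name, trigger, image name, and build context path are placeholders:

```yaml
# Hypothetical workflow sketch (.github/workflows/images.yml); the image
# name, registry secrets, and context path are placeholders.
name: publish-multiarch-images
on:
  push:
    tags: ['*']
jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3      # QEMU emulation for arm64
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: ./platform-operator         # placeholder path
          platforms: linux/amd64,linux/arm64
          push: true
          tags: example/kubeplus-component:${{ github.ref_name }}
```

One such job per KubePlus container (or a matrix over the container directories) would replace the manual build scripts.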

devdattakulkarni commented 1 year ago

@enyachoke

I have pushed a fairly big change just now. It fixes several issues. In your setup, can you download the latest KubePlus chart (3.0.16) and see how it looks?

enyachoke commented 1 year ago

@devdattakulkarni I pulled the changes and was able to deploy https://github.com/enyachoke/helm-charts/tree/main/charts/whoami. I noticed that the chart values have to be set; for example, you can't just use {} for something like annotations, so I will have to set all the values in the chart. See, for example, https://github.com/enyachoke/helm-charts/blob/7702bcd978ab5fa2151af9012b9b2e7337a2ff0f/charts/whoami/values.yaml#L19-L20. I was hoping there is a way to not have to set these annotations in the chart and instead be able to change them when setting up the service. Maybe asking for too much though 😃

devdattakulkarni commented 1 year ago

@enyachoke Yay! Glad to know you were able to deploy your application. What does the application do?

Just to understand what you are asking: you are saying that rather than doing

```yaml
annotations:
  cert-manager.io/cluster-issuer: ""
  traefik.ingress.kubernetes.io/router.middlewares: ""
```

you want to leave the values.yaml as

```yaml
annotations: {}
```

and then specify these during instance creation in the spec.

What error do you get if you leave the annotations as an empty dictionary?

Generally this seems like a fair requirement. If you share the error, I will take a look at how this can be supported.
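Spelling out the intent as a sketch (this is a description of the desired behavior, not a feature KubePlus currently documents):

```yaml
# Sketch of the desired behavior, using the whoami chart from this thread.
# values.yaml in the chart: leave annotations open-ended.
ingress:
  enabled: false
  annotations: {}
---
# Instance spec: supply the concrete annotations only at creation time.
apiVersion: platformapi.kubeplus/v1alpha1
kind: WhoamiService
metadata:
  name: whoami
spec:
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
```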

devdattakulkarni commented 1 year ago

Btw, I haven't had a chance to go through the build changes yet. I am in the middle of working on https://github.com/cloud-ark/kubeplus/issues/1091. As soon as that is done, I will go through the build setup changes.

enyachoke commented 1 year ago

> @enyachoke Yay! Glad to know you were able to deploy your application. What does the application do?

Nothing; this is just a simple whoami chart to test that things work. The real application I intend to deploy is an EMR, https://github.com/ozone-his/ozonepro-docker; it's still a few months away from being ready.

enyachoke commented 1 year ago

> Just to understand what you are asking: you are saying that rather than doing
>
> ```yaml
> annotations:
>   cert-manager.io/cluster-issuer: ""
>   traefik.ingress.kubernetes.io/router.middlewares: ""
> ```
>
> you want to leave the values.yaml as
>
> ```yaml
> annotations: {}
> ```
>
> and then specify these during instance creation in the spec.
>
> What error do you get if you leave the annotations as an empty dictionary?
>
> Generally this seems like a fair requirement. If you share the error, I will take a look at how this can be supported.

Yeah, I want to be able to leave objects like annotations: {} in the values file and provide them in the spec at creation. Currently, I noticed they fail validation like so:

```
Error from server (BadRequest): error when creating "whoami-service.yaml": WhoamiService in version "v1alpha1" cannot be handled as a WhoamiService: strict decoding error: unknown field "spec.ingress.annotations.cert-manager.io/cluster-issuer", unknown field "spec.ingress.annotations.traefik.ingress.kubernetes.io/router.middlewares"
```
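For context, a strict decoding error like this generally means the generated CRD's OpenAPI schema doesn't admit those keys. In standard Kubernetes CRDs there are two usual ways to allow arbitrary keys under a field; the fragment below is a hypothetical illustration of those mechanisms, not the schema KubePlus actually generates:

```yaml
# Hypothetical fragment of a CRD's OpenAPI v3 schema (illustration only).
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        ingress:
          type: object
          properties:
            annotations:
              type: object
              additionalProperties:   # option 1: any string-valued keys
                type: string
              # option 2 (alternative): leave the field fully open
              # x-kubernetes-preserve-unknown-fields: true
```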
enyachoke commented 1 year ago
(image attachment)
enyachoke commented 1 year ago

To replicate, try this ResourceComposition:

```yaml
apiVersion: workflows.kubeplus/v1alpha1
kind: ResourceComposition
metadata:
  name: whoami-resource-composition
spec:
  # newResource defines the new CRD to be installed to define a workflow.
  newResource:
    resource:
      kind: WhoamiService
      group: platformapi.kubeplus
      version: v1alpha1
      plural: whoamiservices
    # URL of the Helm chart that contains Kubernetes resources that represent a workflow.
    chartURL: https://github.com/enyachoke/helm-charts/releases/download/whoami-0.0.3/whoami-0.0.3.tgz
    chartName: whoami
  # respolicy defines the resource policy to be applied to instances of the specified custom resource.
  respolicy:
    apiVersion: workflows.kubeplus/v1alpha1
    kind: ResourcePolicy
    metadata:
      name: whoami-service-policy
    spec:
      resource:
        kind: WhoamiService
        group: platformapi.kubeplus
        version: v1alpha1
      policy:
        # Add the following requests and limits for the first container of all the Pods that are
        # related via an owner reference relationship to instances of the resources specified above.
        podconfig:
          limits:
            cpu: 200m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
  # resmonitor identifies the resource instances that should be monitored for CPU/Memory/Storage.
  # All the Pods that are related to the resource instance through either the ownerReference
  # relationship, or all the relationships (ownerReference, label, annotation, spec properties),
  # are considered in calculating the statistics.
  # The generated output is in Prometheus format.
  resmonitor:
    apiVersion: workflows.kubeplus/v1alpha1
    kind: ResourceMonitor
    metadata:
      name: whoami-service-monitor
    spec:
      resource:
        kind: WhoamiService
        group: platformapi.kubeplus
        version: v1alpha1
      # This attribute indicates that Pods that are reachable through all the relationships should
      # be used as part of calculating the monitoring statistics.
      monitorRelationships: all
```

And this service instance:

```yaml
apiVersion: platformapi.kubeplus/v1alpha1
kind: WhoamiService
metadata:
  name: whoami
spec:
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      traefik.ingress.kubernetes.io/router.middlewares: traefik-redirect@kubernetescrd
    path: /
    hosts:
      - whoami.emmanuelnyachoke.casa
    tls:
    - hosts:
        - whoami.emmanuelnyachoke.casa
      secretName: tls-whoami-ingress-http
```
enyachoke commented 1 year ago

> Btw, I haven't had a chance to go through the build changes yet. I am in the middle of working on #1091. As soon as that is done, I will go through the build setup changes.

No pressure; I will keep it up to date with the new changes. I will be testing more complex deployments over the weekend and will report any issues I find.

devdattakulkarni commented 1 year ago

> No pressure I will keep it up to date with the new changes. I will be testing more complex deployments over the weekend and report any issues I find.

Thanks for sharing the ResourceComposition and the instance. That will be useful.

Yes, please try with complex charts and report any issues that you run into.

We tried all the charts from the Bitnami repository recently. I have been working my way through the various issues that got uncovered. I will be curious to learn how KubePlus does with other complex charts.