Some things come to mind that need to be looked at to make this happen:
We cannot use Routes, ImageStreams and DeploymentConfigs ... they are all OpenShift objects.
Camel K removes the need for Syndesis to perform the S2I build; it has strategies in place to perform the build itself, either via S2I or via Kaniko.
Trying out https://microk8s.io/#get-started
Need to define "plain Kubernetes".
Therefore what is the objective?
- Remove / migrate away from OpenShift-specific structural elements used in Syndesis to allow for a more run-anywhere app?
OpenShift-specific objects like DeploymentConfig and Route, and the way we utilize the S2I build, make Syndesis non-portable across any other Kubernetes distribution. I'd start with having a way to install and run Syndesis on Kubernetes. Defaulting to Camel K for running integrations will give us portability, as it supports both plain Kubernetes and OpenShift.
- Allow / test installation & running of Syndesis on a number of different Kubernetes platforms to maximise community participation?
I'd focus on one; minikube is probably the one used most as a developer platform (similar to minishift/crc), so running on minikube should be a representative common ground for any Kubernetes. I don't mind giving microk8s a try, but I think we should not spread ourselves too thinly.
- Where an Openshift feature is considered essential, provide an alternative configuration for a Kubernetes install while retaining the Openshift feature, ie. maintenance of multiple installable configurations?
The approach Camel K took is to support both OpenShift and Kubernetes, and I think that makes sense. Though I don't think we need to depend on OpenShift specifics too much even when running on OpenShift. What we have, for example, with DeploymentConfig is caused either by us not realizing there was a Kubernetes alternative (Deployment) or by not having that ability at the time we started.
Gist for guidelines on converting DeploymentConfig to Deployment: https://gist.github.com/bmaupin/d5be3ca882345ff92e8336698230dae0
Interesting issue/discussion of the possibility of oc conversion between Deployment and DeploymentConfig (sadly stale at the moment): https://github.com/openshift/origin/issues/16763
Creating an ingress resource -> https://blog.openshift.com/kubernetes-ingress-vs-openshift-route/
Progress with research links
Configured the ability to build the operator image into a docker registry.
Understood that the local docker registry is independent of the kubernetes registry; the s2i build provided in the syndesis build scripts builds and adds the image to the openshift registry, which does not happen with kubernetes.
Encountered an error concerning localhost defaulting to the ipv6 ::1 address, which results in a hang on 'docker push'.
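A minimal sketch of working around the hang by addressing the registry via the explicit IPv4 loopback (the local image name is illustrative; the registry address matches the install command below):

docker tag syndesis/syndesis-operator:latest 127.0.0.1:32000/syndesis-operator:latest
docker push 127.0.0.1:32000/syndesis-operator:latest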
syndesis-operator install operator --image 127.0.0.1:32000/syndesis-operator --tag latest
Note: the built-in registry is NOT the same as the image cache available via microk8s.ctr images. So just because an image was pushed to 127.0.0.1:32000 doesn't mean it will appear in the image cache until it is actually used.
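For example, to check whether the pushed image has landed in the containerd cache (the -n k8s.io namespace is an assumption about where microk8s keeps kubernetes images):

microk8s.ctr -n k8s.io images ls | grep syndesis-operator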
The operator has two distinct switch points available for custom image/tag combinations: a) when building the operator, we can change the default image/tag combination; b) when running the operator, we can override the default image/tag combination.
Gist for guidelines on converting DeploymentConfig to Deployment: https://gist.github.com/bmaupin/d5be3ca882345ff92e8336698230dae0
Once syndesis migrates to camel-k this won't be needed any more, as camel-k takes care of generating the right "deployment" depending on the environment (i.e. it also takes knative services into account).
Thanks @lburgazzoli. Yes, you're right, but we do need it at the moment for converting the other Syndesis DeploymentConfigs, e.g. operator, syndesis-db. Converted the operator at the end of last week.
Kubebox -> https://github.com/astefanutti/kubebox
Kubespy -> https://github.com/pulumi/kubespy
A blog on the interesting problems encountered in kubernetes development.
First experiment with an ingress (the Kubernetes alternative to OpenShift Routes):
Enabling the dashboard and exposing it through an https ingress.
The dashboard is packaged as an addon in microk8s so need to enable it first.
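Assuming the dotted command style microk8s used at the time (matching microk8s.ctr above), enabling the addon is:

microk8s.enable dashboard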
Dashboard is configured with minimal privileges so need to create a service account and bind it to cluster-admin.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
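Save and apply the above, e.g. (the filename is hypothetical):

kubectl apply -f dashboard-admin.yaml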
Once added, can get the dashboard-admin token in order to log in to the dashboard (the token is a loooong string):
secret=$(kubectl -n kube-system get secrets | grep dashboard-admin | awk '{print $1}')
kubectl -n kube-system describe secret/${secret} | grep "token:" | awk '{print $2}'
Cannot use the token yet as the URL of the dashboard has yet to be determined.
Several methods to log in:
Look at the dashboard service and find its IP, eg. 10.1.37.112. Then check the container port setting in the spec to confirm the exposed port, eg. 8443. Thus, it is possible with the likes of microk8s to access the IP directly in a browser and bring up the dashboard at https://<IP>:8443.
This has its limitations: a change to the service will change this IP and, more importantly, this IP is internal so not necessarily reachable.
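A quick way to look those values up (service name and namespace as per the manifest further below):

kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.clusterIP}'
kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].port}'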
The ingress can use paths and/or hosts to redirect to alternative services. However, struggled to get anything working with just paths so moved to using a host.
Host is a dns name that maps to the ip address of the endpoint specified in the service, eg. 127.0.0.1. So in this case, simply added an alias of kube.dash for localhost to /etc/hosts.
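The resulting /etc/hosts entry would look something like:

127.0.0.1   localhost kube.dash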
For TLS/SSL support need to first create a secret containing details of certificate. This requires a couple of steps:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout kube-self-signed.key -out kube-self-signed.crt -subj "/CN=kube.dash/O=kube.dash"
namespace=kube-system # where dashboard is installed
name=dashboard-secret # name of secret referred to in ingress
kubectl -n $namespace create secret tls $name --cert=kube-self-signed.crt --key=kube-self-signed.key
Now can create an ingress resource like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - kube.dash
    secretName: dashboard-secret
  rules:
  - host: kube.dash
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 8443
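Applying and checking the ingress (the filename is hypothetical):

kubectl apply -f dashboard-ingress.yaml
kubectl -n kube-system get ingress dashboard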
If all of this works correctly then navigating to https://kube.dash will display the dashboard.
Supplemental: modifying the ports exposed by the microk8s nginx ingress controller, from its pod spec:
...
spec:
  containers:
  - name: nginx-ingress-microk8s
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.25.1
    args:
    - /nginx-ingress-controller
    - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
    - '--publish-status-address=127.0.0.1'
    - '--http-port=8080' # Add this argument to change http
    - '--https-port=8443' # Add this argument to change https
    ports: # Modify the ports that should be exposed
    - hostPort: 8080
      containerPort: 8080
      protocol: TCP
    - hostPort: 8443
      containerPort: 8443
      protocol: TCP
...
Not required, but FYI: --enable-ssl-passthrough.
OpenShift auto-generates a self-signed key/certificate combo when the service is given the following annotation:
annotations:
  service.alpha.openshift.io/serving-cert-secret-name: <name-of-secret-to-be-created>
This is responsible for the syndesis-oauthproxy-tls secret that is mounted by the syndesis-oauth-proxy.
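For illustration, the annotation as it would appear on the oauth proxy's service (the service name here is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: syndesis-oauthproxy
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: syndesis-oauthproxy-tls
...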
microk8s basic auth csv format:
password,user,uid,"group1,group2,group3"
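An example line in that format (all values illustrative):

s3cr3t,jane,jane-uid,"developers,ops"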
Since kubernetes distributions don't tend to come with an authentication/authorization identity provider, it is necessary to install one and then tie into it with oauth2_proxy using OpenID Connect. oauth2_proxy is used instead of the openshift oauth-proxy, since the latter is designed to work only with openshift. (A sketch of wiring oauth2_proxy to such a provider follows the references below.)
Useful references for setting up keycloak as provider:
An alternative to keycloak is dex, which can act as a shim to google or github.
Using keycloak in oauth2_proxy
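A minimal sketch of running oauth2_proxy against keycloak via its generic OIDC provider (the issuer URL, realm, client values and upstream are hypothetical placeholders):

oauth2_proxy \
  --provider=oidc \
  --oidc-issuer-url=https://keycloak.example.com/auth/realms/syndesis \
  --client-id=syndesis \
  --client-secret=<client-secret> \
  --cookie-secret=<random-seed> \
  --email-domain='*' \
  --upstream=http://syndesis-server:8080 \
  --http-address=0.0.0.0:4180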
First time syndesis executed on a kubernetes implementation!
The --cookie-name switch solves the problem.
Summary of major issues to be addressed:
- The {{.Syndesis.RouteHostname}} is blank
- Cannot install/use ingresses on minishift since it is only OpenShift 3.11 and does not support them
- Need to generate certificates for the oauth2_proxy in order to use TLS; OpenShift does this automatically but of course Kubernetes does not
- The image for oauth2_proxy is on quay.io, hence a need to modify build/conf/config.yaml; this needs further work to add in coordinates for specifying the auth provider, client-id & secret
- Modify the arguments of oauth2_proxy since they need to be broader than those of the openshift version of oauth_proxy
- Update Route to be Ingress, although the difficulty is ensuring this will be backward-compatible
- Small changes in code required, including a Platform attribute in the configuration to act as an if-condition
Conclusion
I'm super excited about this development stream to port it to plain kubernetes
@phantomjinx good job! Is it possible to share a repo containing the modifications that you performed? Thanks
I'd love to support this but we are committed to vanilla Kubernetes on-prem and AWS as a cloud provider.
PR for review -> https://github.com/syndesisio/syndesis/pull/8697
If installation on Kubernetes will be possible, will there also be a helm chart?
@SvenC56 Up until this point, I've never used helm but can certainly consider it.
Please provide plain old docker images and Kubernetes yaml files. No helm, operator, openshift specific stuff.
Coming from a retro on decreasing the complexity of bringing up the dev environment, and also mentioned in Planning Syndesis 2.0: we identified that by working towards installing on plain Kubernetes, we would discover the assumptions we have made and attract community deployment.
ToDo & Considerations: