portainer / kubernetes-beta

Feedback for the BETA version of Portainer with Kubernetes support

[TOPIC] - k3s #6

Open deviantony opened 4 years ago

deviantony commented 4 years ago

We have not tested the deployment of the Portainer for Kubernetes BETA version in k3s.

Do you have any feedback about deploying and using the Portainer for Kubernetes BETA version on k3s?

Discuss it here.

WhiteBahamut commented 4 years ago

Just did it today on my RasPi4. Followed https://github.com/portainer/portainer-k8s and changed the image tag. Had to switch to legacy iptables for k3s. The LoadBalancer sample worked for me; I did not try NodePort.
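
For reference, switching a Raspberry Pi OS / Debian host to legacy iptables typically looks like this (a sketch; flush the current rules first, then reboot):

# Flush current rules, then select the legacy iptables backend
sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot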

wesselah commented 4 years ago

I did the same on a Raspberry Pi 4 and also needed legacy iptables. I am not fluent in Kubernetes, but where do you store the persistent data? Do we need to use a ConfigMap? When removing Portainer we need to enter credentials again, and changes are not saved either.

WhiteBahamut commented 4 years ago

Same problem for me. Not sure if this is a Portainer or k3s issue. I start k3s with the --default-local-storage-path [path] option, but this was not respected by k3s or Portainer. My workaround is to update the ConfigMap as the first step after k3s setup with kubectl apply -f local-volume.yaml:

kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: kube-system
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/path/to/local/folder"]
        }
      ]
    }

Still, after a reboot of the whole host I get the default value for local-path-config. But it seems my updated value is used in the background.
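
To see which value the provisioner is actually using, you can inspect the live ConfigMap with plain kubectl (the deployment name local-path-provisioner is the k3s default and is an assumption here):

# Print the config the local-path provisioner currently sees
kubectl -n kube-system get configmap local-path-config -o yaml
# Restart the provisioner so it picks up an edited config
kubectl -n kube-system rollout restart deployment local-path-provisioner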

WhiteBahamut commented 4 years ago

Added an Ubuntu worker node to my k3s master. Works as expected: I see it in Portainer, and apps are scaled across nodes (if set).

deviantony commented 4 years ago

@wesselah @WhiteBahamut the current manifests for Portainer do not persist the data associated with Portainer (mainly because it depends on the storage provider / storage classes available in your cluster).

If you want to persist the data, you'll have to manually update the manifests to persist the /data container folder.

WhiteBahamut commented 4 years ago

I know, so I extended the manifest to persist /data. k3s comes with a local-path provisioner by default. Yet it seems to "forget" changes made to local-path-config. I assume this is a k3s problem, and Portainer is not messing with the local-path-config ConfigMap.

WhiteBahamut commented 4 years ago

Just to follow up: installing k3s with the command below seems to work for me, and k3s remembers the default path after a reboot (I want to store the data somewhere else, not in the default path).

curl -sfL https://get.k3s.io | sh -s - server --default-local-storage-path [your storage path]

Adding a PVC for Portainer:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pvc
  namespace: portainer
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi

and using it

apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: app-portainer
  template:
    metadata:
      labels:
        app: app-portainer
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
      - name: portainer
        image: portainer/portainer-k8s-beta:linux-arm
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: portainer-pvc

If you remove Portainer and its PVC, it will lose its data, of course.
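
A quick way to confirm the claim is bound and to find where the data lands on disk (standard kubectl, nothing Portainer-specific):

# The claim should show STATUS Bound once the pod is running
kubectl -n portainer get pvc portainer-pvc
# Describe the backing PV to see the host path the provisioner picked
kubectl get pv
kubectl describe pv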

deviantony commented 4 years ago

Thanks for the update @WhiteBahamut

wesselah commented 4 years ago

@WhiteBahamut: does the same apply to the agent command when installing, or only to the server?

WhiteBahamut commented 4 years ago

I am not a k3s expert, but as far as I understand, the path provided to --default-local-storage-path must be available on all cluster nodes. On the agent you don't need to configure it again (there is also no option for it). In my quick tests, services whose local-path PVC pointed to a folder not present on the node failed to start up.
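
If a pod has to land on the one node that actually has the folder, a nodeSelector on the pod template is one workaround (a minimal sketch; the hostname value is a placeholder):

    spec:
      nodeSelector:
        kubernetes.io/hostname: my-node-1   # replace with the node that has the path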

P.s: we might end up abusing this issue ;-)

wesselah commented 4 years ago

Okay, I will test with an NFS share and see what happens.
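
For an NFS share, the usual route is a static PersistentVolume plus a claim that binds to it (a minimal sketch; server, path, and names are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: portainer-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.10        # placeholder NFS server
    path: /exports/portainer    # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pvc
  namespace: portainer
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""          # empty class so the claim binds to the static PV
  resources:
    requests:
      storage: 1Gi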

deviantony commented 4 years ago

@WhiteBahamut @wesselah it's ok for you to discuss in this issue, it was made for that :-)

All of your feedback is welcome!

wesselah commented 4 years ago

@WhiteBahamut do I need to declare a PV first? The PVC status keeps Pending with your example.

wesselah commented 4 years ago

My fault: it was Pending because nothing was claiming it. Now it is Bound and Portainer is working on the NFS share.

WhiteBahamut commented 4 years ago

Yeah, it might take a little while. I extended the Portainer YAML to include the PVC:

apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pvc
  namespace: portainer
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer
  namespace: portainer
spec:
  type: LoadBalancer
  selector:
    app: app-portainer
  ports:
    - name: http
      protocol: TCP
      port: 9000
      targetPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: app-portainer
  template:
    metadata:
      labels:
        app: app-portainer
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
      - name: portainer
        image: portainer/portainer-k8s-beta:linux-arm
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: portainer-pvc

I ditched the edge part, as I don't use it. I think 1Gi is more than needed, but this can be adjusted as you like. I never checked how much Portainer actually consumes in my setups; it should only be a few MB.
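
Applying the combined manifest and checking that everything came up might look like this (the file name is whatever you saved it as):

kubectl apply -f portainer.yaml
# Wait for the rollout, then check the claim, service, and pod
kubectl -n portainer rollout status deployment portainer
kubectl -n portainer get pvc,svc,pods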

jhole89 commented 4 years ago

Deployed on Civo k3s without much issue and it seems to work fine (didn't bother with a PVC yet). I translated the portainer.yaml into pure Terraform though and deployed it using terraform-provider-kubernetes, definition here.

si458 commented 4 years ago

Set up Portainer on k3s with the Traefik ingress removed, no issues. I've then set up a minio pod with a NodePort service, no issues. But when I go into the port mappings in the application list, it's only showing Portainer? I don't see my minio NodePort?

deviantony commented 4 years ago

Hi @si458

Thanks for the feedback. It will depend on how you actually deployed your minio application: if you deployed it as a pod directly, then no, you won't be able to see it in Portainer with the current version (although we're working on that).

The current version only shows applications that are deployed via a Deployment, a StatefulSet or a DaemonSet.
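
In other words, a bare Pod manifest will not show up, but wrapping the same container in a Deployment will. A minimal sketch (the image and names are assumptions; any container works the same way):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio            # assumed image
        args: ["server", "/data"]
        ports:
        - containerPort: 9000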

si458 commented 4 years ago

Hi @deviantony, I deployed it via a Deployment and a Service, so I'm confused now why it doesn't show. Here is a link to my config yaml file, which I have set as a LoadBalancer, but that doesn't show up in Portainer either? The port itself works fine: https://pastebin.com/raw/DY744adx. I'm a beginner to Kubernetes, so please do correct me if I have made any mistakes in my file! 👍

deviantony commented 4 years ago

Thanks for sharing your deployment file @si458 we'll see if we can reproduce it.

si458 commented 4 years ago

@deviantony did you have any luck at all with why the ports aren’t being displayed?

deviantony commented 4 years ago

@si458 we've reproduced your bug yes, and we're investigating it.

si458 commented 4 years ago

@deviantony any luck at all with the NodePort/LoadBalancer not being shown in Portainer yet? I've just noticed the Docker image has been updated and now my panel shows 1.0.0-k8s-rc, but I'm still having this issue?

deviantony commented 4 years ago

@si458 we've not fixed this yet, it's probably gonna be available after the release candidate (when we merge it into the Portainer core).

si458 commented 4 years ago

@deviantony I have just noticed that if I use a LoadBalancer and look at my application, it actually shows:

Load balancer status: available
Load balancer IP address: 192.168.168.136
Container port | Load balancer port
9000 | 9000

and it's the same if I use NodePort as well:

Container port | Cluster node port
9000 | 30007

so it's just the Port mappings tab that never shows them at all.

deviantony commented 4 years ago

@si458 we've reproduced the bug and fixed it in #3990; this will be available in our 2.0 release.