nodesocket opened this issue 2 years ago
I haven't used a Helm chart (I sometimes find them to be overkill), but here is the deployment file that I use in my little home lab:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zero-git-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zero-git
  labels:
    app: zero-git
spec:
  selector:
    matchLabels:
      app: zero-git
  serviceName: zero-git
  template:
    metadata:
      labels:
        app: zero-git
    spec:
      containers:
        - name: zero-git
          image: charmcli/soft-serve:latest
          volumeMounts:
            - name: zero-git-volume
              mountPath: /soft-serve
      volumes:
        - name: zero-git-volume
          persistentVolumeClaim:
            claimName: zero-git-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: zero-git
spec:
  selector:
    app: zero-git
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 23231
```
Note that I'm exposing a load-balanced service on port 22 in my environment because I'm using MetalLB to provide network-level (layer 2/BGP) load balancing, which is why I can use port 22 without conflicting with ports already in use on the K8s node. I'm also using https://longhorn.io to provide the volume to the pod in the StatefulSet, so that if I take down one of the nodes in the cluster, the volume is transparently available to the pod when it launches on another node.
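For anyone following along, a quick sketch of applying the manifest above and confirming MetalLB handed out an address (the filename is a placeholder, not from the original comment):

```shell
# Apply all three resources (PVC, StatefulSet, Service)
kubectl apply -f zero-git.yaml

# The Service's EXTERNAL-IP column should be populated by MetalLB;
# if it stays <pending>, MetalLB isn't assigning addresses.
kubectl get svc zero-git
```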
I was running into issues in my deployment if I didn't set the environment variables `SOFT_SERVE_PORT` and `SOFT_SERVE_BIND_ADDRESS`. The container kept failing with:

```
2022/03/12 16:02:06 env: parse error on field "Port" of type "int": strconv.ParseInt: parsing "tcp://10.43.76.7:22": invalid syntax
```

until I set those two variables. (Kubernetes injects Docker-links-style environment variables for every Service in the namespace, so a Service named `soft-serve` produces `SOFT_SERVE_PORT=tcp://<cluster-ip>:22`, which shadows the integer port variable Soft Serve expects to parse.)
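Since the collision comes from Kubernetes' injected service-link variables, another option (a sketch, not something tested in this thread) is to disable service links on the pod template instead of overriding the variables:

```yaml
# StatefulSet pod-template fragment: stop Kubernetes from injecting
# SOFT_SERVE_PORT-style service-link variables (available since K8s 1.13).
spec:
  template:
    spec:
      enableServiceLinks: false
```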
Here's my setup:

- `local-path` StorageClass
- MetalLB
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: soft-serve-deployment
  namespace: soft-serve
  labels:
    app: soft-serve
spec:
  selector:
    matchLabels:
      app: soft-serve
  serviceName: soft-serve
  template:
    metadata:
      labels:
        app: soft-serve
    spec:
      volumes:
        - name: git-repo-data
          persistentVolumeClaim:
            claimName: soft-serve-pvc
      containers:
        - name: soft-serve
          image: docker.io/charmcli/soft-serve:latest
          ports:
            - containerPort: 23231
          env:
            - name: SOFT_SERVE_PORT
              value: '23231'
            - name: SOFT_SERVE_BIND_ADDRESS
              value: '0.0.0.0'
          volumeMounts:
            - name: git-repo-data
              mountPath: /soft-serve
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: soft-serve-pvc
  namespace: soft-serve
spec:
  resources:
    requests:
      storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
---
apiVersion: v1
kind: Service
metadata:
  name: soft-serve
  namespace: soft-serve
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: soft-serve
  ports:
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 23231
```
Hope this helps!
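Once the Service is reachable, connecting is plain SSH; a sketch of a client-side alias, assuming `git.lan` stands in for whatever address MetalLB assigns (both the alias and the hostname are placeholders, not from this thread):

```
# ~/.ssh/config — optional alias for the Soft Serve endpoint
Host soft-serve
    HostName git.lan
    Port 22

# Then open the Soft Serve TUI with:
#   ssh soft-serve
```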
Agree that a Helm install would be clutch. Appreciate the quick start, rigzba21. For anyone who ran it and it didn't work: double-check your PVC's storage class and change `local-path` to one of the available classes shown by `kubectl get sc`.
I see there is support for Docker, any chance there is a Kubernetes Helm chart or one can be created?
I started an initial Helm chart, but haven't had time to thoroughly test it. I'm still working out how to expose the port. Feel free to MR/PR/Fork or send bug reports or ideas.
Has anyone gotten this working on a remote cluster? I can run Soft Serve in Docker locally no problem, but when I try to deploy it to a cluster (no PVC yet, just a Deployment + Service) I get `kex_exchange_identification: read: Connection reset by peer`.
Just a follow-up here: my pod was crash-looping for some reason (I confirmed this by adding the promwish middleware and using it as a liveness check). That in turn was causing my known_hosts entries to be incorrect, though why it surfaced as a kex_exchange error instead of the usual known_hosts warning, I'm not sure.
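For anyone else debugging this, a crash-looping SSH server can be caught earlier with a liveness probe; a minimal sketch using a plain TCP check against Soft Serve's default SSH port (the probe timings are illustrative, not from this thread):

```yaml
# Container fragment: restart the container if the SSH port
# stops accepting connections. 23231 is Soft Serve's default SSH port.
livenessProbe:
  tcpSocket:
    port: 23231
  initialDelaySeconds: 10
  periodSeconds: 15
```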