zmoog opened 2 weeks ago
Missing setup steps:
brew install kustomize
Clone the repo and deploy on the Kind cluster:
gh repo clone zmoog/go-bender
cd go-bender
make service
$ make dev-load
kind load docker-image zmoog/bender/bender-bot:0.0.1 --name dev
Image: "zmoog/bender/bender-bot:0.0.1" with ID "sha256:294e29669d8f7bc950a984c8faf4c1d7d94eb4d5a5bd7ac358a85a2a39e847b9" not yet present on node "dev-control-plane", loading...
$ make dev-apply
kustomize build zarf/k8s/dev/bender | kubectl apply -f -
namespace/bender-system created
deployment.apps/bender created
kubectl wait pods --namespace=bender-system --selector app=bender --for=condition=Ready --timeout=60s
make dev-status
According to kubectl describe, the pod cannot be scheduled because the node does not have enough CPU:
kubectl describe pod bender-8678545685-5ch4w -n bender-system
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  41s (x2 over 6m7s)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
kubectl get events -n bender-system
LAST SEEN TYPE REASON OBJECT MESSAGE
3m12s Warning FailedScheduling pod/bender-8678545685-5ch4w 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
8m38s Normal SuccessfulCreate replicaset/bender-8678545685 Created pod: bender-8678545685-5ch4w
8m38s Normal ScalingReplicaSet deployment/bender Scaled up replica set bender-8678545685 to 1
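The "Insufficient cpu" message means the pod's CPU request exceeds what is left of the node's allocatable CPU after the system pods' requests are subtracted. A rough sketch of the arithmetic, with illustrative numbers only (the real values depend on the host running Kind and can be read from `kubectl describe node dev-control-plane` under Allocatable / Allocated resources):

```shell
allocatable_m=2000       # assumed: 2 CPUs of allocatable capacity (2000m)
system_requests_m=950    # assumed: combined CPU requests of system pods
bender_request_m=1500    # the original request in the dev patch

# The scheduler admits a pod only if its request fits in the free capacity.
free_m=$((allocatable_m - system_requests_m))
echo "free: ${free_m}m, requested: ${bender_request_m}m"
if [ "$bender_request_m" -gt "$free_m" ]; then
  echo "scheduling fails: Insufficient cpu"
fi
```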
I reduced the resource requirements:
git diff zarf/k8s/dev/bender/dev-bender-patch-deployment.yaml
diff --git a/zarf/k8s/dev/bender/dev-bender-patch-deployment.yaml b/zarf/k8s/dev/bender/dev-bender-patch-deployment.yaml
index 71188b7..737efb3 100644
--- a/zarf/k8s/dev/bender/dev-bender-patch-deployment.yaml
+++ b/zarf/k8s/dev/bender/dev-bender-patch-deployment.yaml
@@ -26,8 +26,8 @@ spec:
- name: bender-bot
resources:
requests:
- cpu: 1500m
+ cpu: 1000m
memory: 128Mi
limits:
- cpu: 1500m
- memory: 128Mi
\ No newline at end of file
+ cpu: 1000m
+ memory: 128Mi
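For reference, the resulting resources block in the patch (reconstructed from the diff above; exact indentation in the file is assumed) reads:

```yaml
- name: bender-bot
  resources:
    requests:
      cpu: 1000m
      memory: 128Mi
    limits:
      cpu: 1000m
      memory: 128Mi
```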
And finally the pod has started:
$ make dev-status
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-control-plane Ready control-plane 59m v1.31.2 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 5.15.0-124-generic containerd://1.7.18
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 59m <none>
kubectl get pods -o wide --watch --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bender-system bender-ffdb496df-p5ftv 0/1 CrashLoopBackOff 4 (34s ago) 2m10s 172.18.0.2 dev-control-plane <none> <none>
kube-system coredns-7c65d6cfc9-cjfvv 1/1 Running 0 58m 10.244.0.4 dev-control-plane <none> <none>
kube-system coredns-7c65d6cfc9-zpnmz 1/1 Running 0 58m 10.244.0.2 dev-control-plane <none> <none>
kube-system etcd-dev-control-plane 1/1 Running 0 59m 172.18.0.2 dev-control-plane <none> <none>
kube-system kindnet-j65pc 1/1 Running 0 58m 172.18.0.2 dev-control-plane <none> <none>
kube-system kube-apiserver-dev-control-plane 1/1 Running 0 58m 172.18.0.2 dev-control-plane <none> <none>
kube-system kube-controller-manager-dev-control-plane 1/1 Running 0 58m 172.18.0.2 dev-control-plane <none> <none>
kube-system kube-proxy-msb8j 1/1 Running 0 58m 172.18.0.2 dev-control-plane <none> <none>
kube-system kube-scheduler-dev-control-plane 1/1 Running 0 59m 172.18.0.2 dev-control-plane <none> <none>
local-path-storage local-path-provisioner-57c5987fd4-4q58c 1/1 Running 0 58m 10.244.0.3 dev-control-plane <none> <none>
But now the pod is crashing.
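CrashLoopBackOff means the kubelet keeps restarting the failing container with an exponentially growing delay, starting around 10s and doubling up to a 5-minute cap; a small sketch of that schedule:

```shell
# Simulates the kubelet's restart back-off: 10s, doubling, capped at 300s.
delay=10
for restart in 1 2 3 4 5 6; do
  echo "restart $restart after ${delay}s"
  delay=$((delay * 2))
  [ "$delay" -gt 300 ] && delay=300
done
```

The natural next step here is kubectl logs on the crashing pod in bender-system to see why the process exits.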
Two problems:
1. The deployment spec was missing the reference to the Discord token secret.
2. The secret did not exist in the bender-system namespace.
To fix the problems, I did the following:
Added the missing reference to the secret:
env:
- name: DISCORD_TOKEN
valueFrom:
secretKeyRef:
name: discord-token
key: discord-token
Created the secret in the bender-system namespace:
kubectl create secret generic discord-token --from-literal=discord-token='[redacted]' -n bender-system
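To verify the secret afterwards, kubectl get secret discord-token -n bender-system -o jsonpath='{.data.discord-token}' returns the value base64-encoded, since that is how Kubernetes stores Secret data. A quick local sketch of that encode/decode round-trip, with a placeholder value instead of the real token:

```shell
# Placeholder value; kubectl's --from-literal base64-encodes it on creation.
token='example-token'
encoded=$(printf '%s' "$token" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "stored (base64): $encoded"
echo "decoded: $decoded"
```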
And FINALLY, the bot is running free!
I pulled the zmoog/better-commands branch, rebuilt and redeployed, and now I can enjoy the latest version.