vigeeking / homeAutomation

My goal is to create a pipeline that is built exclusively with tools I either already know, or am only learning because they provide added value to the project.
https://github.com/vigeeking/homeAutomation

get microk8s set up #73

Closed · vigeeking closed this issue 3 years ago

vigeeking commented 4 years ago

I've been tinkering around with this one long enough as part of the media replication story (issue #3) that it really should be its own story. I've worked with k8s in isolation before, but it's always been a very narrow "do this thing" approach. I had originally planned on doing k8s the hard way (https://github.com/kelseyhightower/kubernetes-the-hard-way), but after talking with Justin, I think I'm just gonna stop off briefly to make sure I understand each of the individual components. I also wanted to keep a log of what I've done in case I run into any of these problems again, or ever want to brush up. This task is done when I have launched my first application from a helm chart as part of my pipeline.

vigeeking commented 4 years ago

Good comic from Justin, I had read this before I knew anything about k8s (years ago) but it makes a lot more sense now, and still likely serves as a good reference point: https://cloud.google.com/kubernetes-engine/kubernetes-comic

vigeeking commented 4 years ago

One of the other things that I'm still struggling some with is ingress ports and network sharing within k8s. This seems like a very good cheat sheet to use: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward
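As a quick reference for myself, a minimal port-forward sketch (pod and service names here are placeholders, not anything from this cluster):

    # Forward local port 8080 to port 80 on a specific pod
    kubectl port-forward pod/my-pod 8080:80
    # Or target a service instead, so the forward survives the pod being replaced
    kubectl port-forward svc/my-service 8080:80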

vigeeking commented 4 years ago

Got everything set up; it looks like I hadn't set up my microk8s config correctly. I was getting "k8s not found" errors, but they seem to have been resolved after running this: cat $HOME/.kube/config | grep microk8s (found via https://webcloudpower.com/use-kubernetics-locally-with-microk8s/). Now I'm having some problems with helm, but at least progress is finally being made.
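For future reference, a minimal sketch of how the microk8s credentials usually end up in a kubeconfig (assuming microk8s was installed as a snap; the grep above only checks that the entry exists, it doesn't create it):

    # Export microk8s's kubeconfig so plain kubectl and helm can talk to the cluster
    microk8s config > $HOME/.kube/config
    # Sanity-check the context and the node
    kubectl config get-contexts
    kubectl get nodes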

vigeeking commented 4 years ago

Got the helm chart for hassio deployed, but ran into some problems I want to flesh out here. When I installed the helm chart, I got this message:

root@vigeeking:/home/tim# helm install hassio billimek/home-assistant --version 1.1.0
NAME: hassio
LAST DEPLOYED: Tue Aug 18 11:42:29 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

  1. Get the application URL by running these commands:
     export POD_NAME=$(kubectl get pods --namespace default -l "app=home-assistant,release=hassio" -o jsonpath="{.items[0].metadata.name}")
     echo "Visit http://127.0.0.1:8080 to use your application"
     kubectl port-forward $POD_NAME 8080:80

The export POD_NAME failed, and I don't know why. I was able to get the pod name manually (kubectl get pods), but now the pod is listed as Pending, which means I can't forward. I looked into the issue further. I am going to be disposing of this instance pretty soon, so I am not going to sanitize this output (though I would normally sanitize the name and anything that looks like a hash value):

root@vigeeking:/home/tim# kubectl describe pods hassio-home-assistant-d89cb6fc8-5l9jh
Name:           hassio-home-assistant-d89cb6fc8-5l9jh
Namespace:      default
Priority:       0
Node:
Labels:         app.kubernetes.io/instance=hassio
                app.kubernetes.io/name=home-assistant
                pod-template-hash=d89cb6fc8
Annotations:
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/hassio-home-assistant-d89cb6fc8
Containers:
  home-assistant:
    Image:      homeassistant/home-assistant:0.113.3
    Port:       8123/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:api/ delay=60s timeout=10s period=10s #success=1 #failure=5
    Readiness:  http-get http://:api/ delay=60s timeout=10s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /config from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fd2g6 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hassio-home-assistant
    ReadOnly:   false
  default-token-fd2g6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fd2g6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason             Age                    From               Message


  Warning  FailedScheduling   3m1s (x1066 over 26h)  default-scheduler  running "VolumeBinding" filter plugin for pod "hassio-home-assistant-d89cb6fc8-5l9jh": pod has unbound immediate PersistentVolumeClaims

It looks like this may be a known issue for microk8s, and I will next be trying this workaround, since it appears to be a persistent volume claim issue: https://github.com/kubernetes/minikube/issues/7828#issuecomment-620662496
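Before trying that workaround, a quick way to confirm the unbound-claim theory (the claim name comes from the describe output above):

    # The chart's PersistentVolumeClaim should eventually show Bound
    kubectl get pvc hassio-home-assistant -n default
    # If it stays Pending, its events usually say why (e.g. no provisioner / storage class)
    kubectl describe pvc hassio-home-assistant -n default
    # Check whether any storage class exists and which one is the default
    kubectl get storageclass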

jwhollingsworth commented 4 years ago

If you didn't do it, you need to enable storage: "microk8s enable storage"

Also, that export looks like it didn't work because the pod doesn't have the labels it is filtering on. Seems like a bug in that chart's NOTES.txt file.

I assume that is what this is doing: -l "app=home-assistant,release=hassio"

The Pod labels you show are:

Labels: app.kubernetes.io/instance=hassio app.kubernetes.io/name=home-assistant
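Putting those two together, a selector that matches the labels actually on the pod would presumably look like this (a sketch only, based on the labels shown above):

    # Hypothetical corrected export, using the pod's actual labels
    export POD_NAME=$(kubectl get pods --namespace default \
      -l "app.kubernetes.io/name=home-assistant,app.kubernetes.io/instance=hassio" \
      -o jsonpath="{.items[0].metadata.name}")
    echo $POD_NAME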

vigeeking commented 4 years ago

Good call on enabling storage, ty for that. For the other, I believe it was failing because the pod kept crashing due to the volume issue, which ties back into enabling storage. I'm going to let it sit for a bit and see if it self-heals, but if not I'll get more info up here within the next hour or two.
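In case it doesn't self-heal, the checks I'd run (a sketch; my understanding is that microk8s enable storage adds a hostpath provisioner, and the Pending pod should reschedule once the claim binds):

    # The storage addon should register a default storage class
    kubectl get storageclass
    # The PVC should move from Pending to Bound once a provisioner exists
    kubectl get pvc -n default
    # Watch the pod; if it never leaves Pending, deleting it lets the ReplicaSet recreate it
    kubectl get pods -n default -w
    kubectl delete pod hassio-home-assistant-d89cb6fc8-5l9jh -n default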

vigeeking commented 3 years ago

I'm still having storage issues. For whatever reason, I just can't get storage in helm working the way I'd like. I've kind of run out of ideas for this one, so I'm going to close it. If need be I can reopen it; otherwise I'll assume I've passed any batons on, so there is no reason to leave it open. I think this is a quantum issue.

kamyar commented 3 years ago

This worked for me:

    export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=home-assistant,app.kubernetes.io/instance=home-assistant" -o jsonpath="{.items[0].metadata.name}")

It seems to have been fixed in the post-install help output (the chart notes) I got.
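For completeness, the follow-up step once $POD_NAME resolves; the describe output above shows the container listening on 8123, so forwarding that port directly may be more reliable than the 8080:80 from the notes (an assumption on my part):

    # Forward the Home Assistant port from the pod to localhost
    kubectl port-forward $POD_NAME 8123:8123
    # Then browse to http://127.0.0.1:8123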