kyma-incubator / local-kyma

Local installation on k3d cluster

Windows version #7

Closed · VishnAndr closed this issue 4 years ago

VishnAndr commented 4 years ago

Hi,

Trying to make the same or a similar script work for WSL2 with Docker Desktop for Windows. I succeeded with the installation only once, with exactly the same script (the previous version for k3d 1.x, not with k3d 3.0). With that successful installation (apart from the certificate), all pods looked healthy, except that I didn't figure out what to do with the routing, and hence the Kyma Console was not accessible.

But most of the time it just goes nowhere. What I mean is: during installation it starts well, but then it loses the connection to the cluster, and any command (for example, getting all pods) times out.

Any plans to make it happen for WSL2? Or is it out of scope for the PoC? Or does anyone have an idea how it could theoretically be made to work with WSL2?

pbochynski commented 4 years ago

Hi Andrei, we plan to add it to the Kyma CLI and make it work on Windows as well. The only problem is that most Kyma contributors use Mac or Linux machines, but I am glad you tried. If you lost the connection later, the cluster probably didn't get enough resources. Can you check how much memory and CPU you have allocated to Docker (see the quick check after the list below)? I have 4 CPUs and 8 GB RAM, which is enough. You can also try removing some components from the script to check whether you can at least log in to the Kyma Console. You can safely comment out these lines (10 components) and still be able to log in:

helm upgrade -i rafter resources/rafter --set $OVERRIDES -n kyma-system &
helm upgrade -i service-catalog resources/service-catalog --set $OVERRIDES -n kyma-system &
helm upgrade -i service-catalog-addons resources/service-catalog-addons --set $OVERRIDES -n kyma-system &
helm upgrade -i logging resources/logging --set $OVERRIDES -n kyma-system &
helm upgrade -i knative-serving resources/knative-serving --set $OVERRIDES -n knative-serving &
helm upgrade -i knative-eventing resources/knative-eventing -n knative-eventing &
helm upgrade -i application-connector resources/application-connector --set $OVERRIDES -n kyma-integration &
helm upgrade -i knative-provisioner-natss resources/knative-provisioner-natss -n knative-eventing &
helm upgrade -i nats-streaming resources/nats-streaming -n natss &
helm upgrade -i event-sources resources/event-sources -n kyma-system &
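
As for checking the resources, this is a quick way to see what Docker and the WSL2 VM actually get; just a sketch using standard docker and Linux commands, not part of the install script:

# CPUs and memory the Docker daemon reports
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
# What the WSL2 VM itself sees
nproc
free -h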
VishnAndr commented 4 years ago

"Mac or Linux machines"

That's why I'm trying with WSL2, not pure Windows. According to the Docker docs:

Docker Desktop uses the dynamic memory allocation feature in WSL 2 to greatly improve the resource consumption. This means, Docker Desktop only uses the required amount of CPU and memory resources it needs, while enabling CPU and memory-intensive tasks such as building a container to run much faster.

I see my Ubuntu distro has 24 GB allocated and 4 CPUs. I will try to play around with the components. Thanks for the suggestion! Will keep you posted if I make any progress.

PS: Maybe I was not clear: it's losing the connection during the installation.
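
For reference, if the dynamic allocation is suspected, WSL2 resources can also be pinned explicitly with a .wslconfig file on the Windows side. The path and values below are only an example, not something the script requires:

# Write %UserProfile%\.wslconfig on the Windows host (here from within WSL2; replace <your-user>)
cat > /mnt/c/Users/<your-user>/.wslconfig <<'EOF'
[wsl2]
memory=8GB
processors=4
EOF
# Then restart WSL from PowerShell for the limits to apply: wsl --shutdown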

VishnAndr commented 4 years ago

Yep, it did work. First, I think k3d v3.0.0 made some difference.

Secondly, I tried your suggestion of removing the 10 components. It helped, and Kyma started with no issues. I checked numerous times and the outcome was pretty stable: an installation with no issues within 4.5 minutes.

Then I started adding the components back one by one, trying to understand what was causing the problem. After multiple attempts, the installation that more or less works for me in WSL2 with Docker Desktop and Ubuntu 20.04 as the distro (plus the latest kubectl, Helm 3, and k3d) includes everything except:

helm upgrade -i knative-provisioner-natss resources/knative-provisioner-natss -n knative-eventing &
helm upgrade -i nats-streaming resources/nats-streaming -n natss &
helm upgrade -i event-sources resources/event-sources -n kyma-system &

As soon as I add these components back in different combinations and try to install, they cause various issues. What is even more frustrating, the issues are not consistent: sometimes some pods are constantly crashing, sometimes the connection to the cluster is lost in the middle of the installation, and sometimes jobs exceed their deadline or run forever.
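
A minimal set of checks for narrowing this down, assuming kubectl is pointed at the k3d cluster (standard kubectl commands, nothing specific to the script):

# Pods that are not Running or Completed, across all namespaces
kubectl get pods --all-namespaces | grep -Ev 'Running|Completed'
# Installation jobs and whether they hit their deadline
kubectl get jobs --all-namespaces
# Details and previous logs of a crashing pod (replace the placeholders)
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous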

pbochynski commented 4 years ago

Have you tried starting with just these components? All the components after Istio can be installed in any order. It would be good to verify whether the problem is related to the limited resources or whether eventing is to blame. Maybe try just these components after Istio (and then watch the pods, as sketched after the list):

helm upgrade -i ingress-dns-cert ingress-dns-cert --set $OVERRIDES -n istio-system & 
helm upgrade -i istio-kyma-patch resources/istio-kyma-patch -n istio-system &

helm upgrade -i knative-serving resources/knative-serving --set $OVERRIDES -n knative-serving &
helm upgrade -i knative-eventing resources/knative-eventing -n knative-eventing &

helm upgrade -i knative-provisioner-natss resources/knative-provisioner-natss -n knative-eventing &
helm upgrade -i nats-streaming resources/nats-streaming -n natss &
helm upgrade -i event-sources resources/event-sources -n kyma-system &
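
Once that subset finishes, watching the eventing-related pods should show whether they come up cleanly; a sketch, assuming the namespaces from the commands above:

# Watch the eventing and serving pods after installing the subset
kubectl get pods -n knative-eventing -w
kubectl get pods -n natss
kubectl get pods -n knative-serving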
pbochynski commented 4 years ago

Closed due to inactivity