wireapp / wire-server-deploy

Code to install/deploy wire-server (on kubernetes)
https://docs.wire.com
GNU Affero General Public License v3.0

Unable to install wire-server with helm #246

Open · ericklind opened this issue 4 years ago

ericklind commented 4 years ago

When I try to install the server demo, I have the following error:

helm upgrade --install databases-ephemeral wire/databases-ephemeral --wait
Release "databases-ephemeral" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

I'm very new to kubernetes, so I have no idea how to even begin troubleshooting this.

Thanks.

jschaul commented 4 years ago

Hi, thanks for the report. Can you tell me which version of helm and kubernetes you're using? I.e. can you give me the output from:

helm version
kubectl version
ericklind commented 4 years ago

kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-10T21:53:51Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.6", GitCommit:"72c30166b2105cd7d3350f2c28a219e6abcd79eb", GitTreeState:"clean", BuildDate:"2020-01-18T23:23:21Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

helm version
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}

jschaul commented 4 years ago

Thanks! We know the code works for kubernetes server versions v1.12.x and v1.14.x (which are old, we need to upgrade, that's planned) and helm v3.1.1.

I think this issue might be fixed by this PR once it's complete and merged: https://github.com/wireapp/wire-server-deploy/pull/203

So I think you have two options: 1) use an older version of kubernetes (1.14, or maybe 1.15 works as well), or 2) wait a little.
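
If you want to confirm the mismatch on your own cluster, kubectl can list which API versions the server still serves (a generic check, nothing wire-specific; Deployment was removed from extensions/v1beta1 in kubernetes 1.16, which matches the server version you posted):

# List the API versions the server serves; on 1.16+ extensions/v1beta1
# no longer includes Deployment:
kubectl api-versions | grep extensions
# Deployments now live in apps/v1:
kubectl api-resources --api-group=apps | grep deployments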

ericklind commented 4 years ago

I changed to the previous versions and that seemed to move things along. Now I'm at the point where I install wire-server, but the directions are a little confusing. It says:

Change back to the wire-server-deploy directory. Copy example demo values and secrets:

Where is this? There is no reference to it beforehand, so how do I "change back" to it? There are alternate directions, but again, they don't specify where either. Is this on the kube machine? My local machine? The directions assume we know where these things go, and that gets confusing.

ericklind commented 4 years ago

I'm also running this on a kube cluster on DigitalOcean, so I don't know how to create the zauth.txt file:

docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > zauth.txt

Is there any other way to do this?

jschaul commented 4 years ago

> I'm also running this on a kube cluster on DigitalOcean, so I don't know how to create the zauth.txt file: docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > zauth.txt
>
> Is there any other way to do this?

You do this locally; for that you need to have docker available, either on your own machine or on any server you control.

Alternatively, you can compile all of https://github.com/wireapp/wire-server yourself and run ./dist/zauth with the same arguments - but that is a much more involved setup (getting a Haskell development environment up takes a few hours).
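
For the docker route, this is the whole step run locally (a sketch; any machine with docker works, and it does not need access to the cluster):

# Generate the keypair on your laptop or any server with docker:
docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > zauth.txt
# Inspect the result and copy the keys into your secrets configuration:
cat zauth.txt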

ericklind commented 4 years ago

Okay, I was able to get that part working. Now I'm at the DNS records.

  1. What records are these? A records?
  2. It says "(Yes, they all need to point to the same IP address - this is necessary for the nginx ingress to know how to do internal routing based on virtual hosting.)" What IP address is this? Looking at the dashboard, each of the items listed has an IP address. How do I know which one to point them at?

ericklind commented 4 years ago

I got a load balancer set up, added the SSL, and pointed all the domains to the load balancer. I forwarded ports 443 -> 31773 and 80 -> 31773. I added a Let's Encrypt cert for the domain, and I'm using it on the HTTPS entry.

But when I run curl -i https://nginz-https.<my_domain>/status

I get

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>

If I set the cert to Passthrough, I get:

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to nginz-https.<my_domain>.com:443

If I set it to TCP, it doesn't work either.

Is there something that I'm missing?

ericklind commented 4 years ago

I'm still unable to get this up and running. I'd love to give it a try and see it in action, but the documentation is missing so much about how to actually set this up. I could really use some help.

jschaul commented 4 years ago

It sounds like the IPs you're pointing your load balancer to are not correct, or the nginz component (part of wire-server) is not up due to a misconfiguration, or the nginx-ingress-services and nginx-ingress-controller charts were not installed or are misconfigured.
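
A couple of generic checks that narrow this down (a sketch; namespace and label names depend on how you installed the charts):

# Does the ingress controller service expose the NodePorts (31773) your
# load balancer forwards to?
kubectl get svc --all-namespaces | grep ingress
# If nginz pods exist but are unhealthy, their logs usually say why
# (add -n <namespace> if wire is not in the default namespace):
kubectl logs -l app=nginz --all-containers=true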

What does kubectl get pods --all-namespaces return? All "Running" or "Completed", or maybe some errors? This is an example "expected" output for a (specific variation of a) multi-node setup.

NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-7646874c97-gwbsz                                         1/1     Running   0          39d
kube-system   coredns-7646874c97-h6jrd                                         1/1     Running   0          39d
kube-system   dns-autoscaler-56c969bdb8-v6stv                                  1/1     Running   0          39d
kube-system   kube-apiserver-kubernetenode0                                    1/1     Running   9          41d
kube-system   kube-apiserver-kubernetenode1                                    1/1     Running   8          41d
kube-system   kube-apiserver-kubernetenode2                                    1/1     Running   10         41d
kube-system   kube-controller-manager-kubernetenode0                           1/1     Running   11         41d
kube-system   kube-controller-manager-kubernetenode1                           1/1     Running   10         41d
kube-system   kube-controller-manager-kubernetenode2                           1/1     Running   9          41d
kube-system   kube-flannel-2ksmp                                               2/2     Running   19         41d
kube-system   kube-flannel-mmppj                                               2/2     Running   47         41d
kube-system   kube-flannel-x98vn                                               2/2     Running   21         41d
kube-system   kube-proxy-ndhsn                                                 1/1     Running   8          41d
kube-system   kube-proxy-qwqwf                                                 1/1     Running   8          41d
kube-system   kube-proxy-scqjv                                                 1/1     Running   7          41d
kube-system   kube-scheduler-kubernetenode0                                    1/1     Running   10         41d
kube-system   kube-scheduler-kubernetenode1                                    1/1     Running   9          41d
kube-system   kube-scheduler-kubernetenode2                                    1/1     Running   9          41d
kube-system   kubernetes-dashboard-6c7466966c-8wth8                            1/1     Running   0          39d
kube-system   nodelocaldns-25zh5                                               1/1     Running   7          41d
kube-system   nodelocaldns-7g4l2                                               1/1     Running   7          41d
kube-system   nodelocaldns-tfbsc                                               1/1     Running   9          41d
wire          account-pages-6896b8f555-zrdfb                                   1/1     Running   0          39d
wire          brig-6b56689f65-shctf                                            1/1     Running   5          39d
wire          cannon-0                                                         1/1     Running   0          39d
wire          cargohold-577bf5d7ff-57dd6                                       1/1     Running   0          39d
wire          demo-smtp-8656864598-4bkzp                                       1/1     Running   0          39d
wire          fake-aws-dynamodb-5757d74b76-hkx28                               2/2     Running   0          39d
wire          fake-aws-sns-5c56774d95-2lmf5                                    2/2     Running   0          39d
wire          fake-aws-sqs-554bbc684d-jsf6r                                    2/2     Running   0          39d
wire          galley-8d766fd95-85d25                                           1/1     Running   0          39d
wire          gundeck-5879d8bb85-xgx4j                                         1/1     Running   5          39d
wire          nginz-5f595954bf-b5tdl                                           2/2     Running   0          39d
wire          nginz-5f595954bf-df7th                                           2/2     Running   0          39d
wire          nginz-5f595954bf-nr8kc                                           2/2     Running   0          39d
wire          reaper-55ccbcfbfd-gshdh                                          1/1     Running   0          39d
wire          redis-ephemeral-69bb4885bb-mgll8                                 1/1     Running   0          39d
wire          spar-84996bc9b8-hv7qh                                            1/1     Running   0          39d
wire          team-settings-558b6ffd55-cxr7t                                   1/1     Running   0          39d
wire          webapp-5584f9d987-mbdkr                                          1/1     Running   0          39d
wire          wire-nginx-ingress-controller-controller-8nlkz                   1/1     Running   0          39d
wire          wire-nginx-ingress-controller-controller-pl5s6                   1/1     Running   0          39d
wire          wire-nginx-ingress-controller-controller-zbw99                   1/1     Running   0          39d
wire          wire-nginx-ingress-controller-default-backend-6fd66997b4-qqn5r   1/1     Running   0          39d

Your mileage may vary a little on the demo setup.
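
If any pod is not Running or Completed, the standard way to dig in (generic kubectl, not wire-specific):

# Replace the placeholders with the failing pod:
kubectl describe pod <pod-name> -n <namespace>   # events, scheduling, image pulls
kubectl logs <pod-name> -n <namespace>           # application-level errors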

Documentation is work-in-progress. Yes, we would like to make it easier to install, and we are aware it's a little rough around the edges in some parts.

jschaul commented 4 years ago

And the following is an example "expected" output of kubectl get pods --all-namespaces on a demo single-node machine:

kubectl get pods --all-namespaces
NAMESPACE     NAME                                                       READY   STATUS      RESTARTS   AGE
default       account-pages-6847d74d49-gb6xn                             1/1     Running     0          69d
default       brig-66f6b88c4b-kcqm4                                      1/1     Running     0          11d
default       brig-index-migrate-data-wd86v                              0/1     Completed   0          11d
default       cannon-0                                                   1/1     Running     0          11d
default       cargohold-6d8ff9fcf7-5tpz5                                 1/1     Running     0          11d
default       cassandra-ephemeral-0                                      1/1     Running     0          98d
default       cassandra-migrations-7xlqw                                 0/1     Completed   0          11d
default       demo-smtp-84b7b85ff6-vz2gw                                 1/1     Running     0          98d
default       elasticsearch-ephemeral-8545b66bcc-k9lkq                   1/1     Running     0          98d
default       elasticsearch-index-create-r6jbw                           0/1     Completed   0          11d
default       elasticsearch-index-kzqj5                                  0/1     Completed   0          69d
default       fake-aws-dynamodb-84f87cd86b-hsqll                         2/2     Running     0          11d
default       fake-aws-s3-5c846cb5d8-ts8tr                               1/1     Running     0          98d
default       fake-aws-s3-reaper-7c6d9cddd6-lqxh6                        1/1     Running     0          98d
default       fake-aws-sns-5c56774d95-c95p9                              2/2     Running     0          98d
default       fake-aws-sqs-554bbc684d-g5dj7                              2/2     Running     0          98d
default       galley-7b86d6c6f-f7ggv                                     1/1     Running     0          11d
default       gundeck-5df78c7d-ct4zn                                     1/1     Running     0          11d
default       nginx-ingress-controller-controller-h942k                  1/1     Running     0          11d
default       nginx-ingress-controller-default-backend-9866b44fd-2v8dz   1/1     Running     0          11d
default       nginz-7b9dd59ff6-vc5kr                                     2/2     Running     0          11d
default       redis-ephemeral-69bb4885bb-rpvq4                           1/1     Running     0          98d
default       spar-79b94bc8d6-bwz45                                      1/1     Running     0          11d
default       team-settings-7957dc5bd7-f68gj                             1/1     Running     0          69d
default       webapp-6dbdfb64bc-pbcvt                                    1/1     Running     0          11d
kube-system   coredns-56bc6b976d-h8g2c                                   0/1     Pending     0          98d
kube-system   coredns-56bc6b976d-mlbjp                                   1/1     Running     0          98d
kube-system   dns-autoscaler-56c969bdb8-8jrzm                            1/1     Running     0          98d
kube-system   kube-apiserver-qa-demo-kubenode01                          1/1     Running     0          98d
kube-system   kube-controller-manager-qa-demo-kubenode01                 1/1     Running     5          98d
kube-system   kube-flannel-f28bl                                         2/2     Running     0          98d
kube-system   kube-proxy-j2rbl                                           1/1     Running     0          89d
kube-system   kube-scheduler-qa-demo-kubenode01                          1/1     Running     3          98d
kube-system   kubernetes-dashboard-6c7466966c-t9lqp                      1/1     Running     0          98d
kube-system   nodelocaldns-g4rc5                                         1/1     Running     0          98d
kube-system   tiller-deploy-86c8b7897c-578gc                             1/1     Running     0          98d

kubectl describe nodes | grep public-ip should give you the IPs of kubernetes nodes.
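
You can then verify the DNS side against that IP; the records are plain A (or CNAME) records, all resolving to the same address (a sketch; nginz-https is the record you curl'd above, and the full list of required records is in the wire docs):

# Each wire hostname should resolve to the same IP your load balancer /
# NodePort setup listens on:
dig +short nginz-https.<my_domain>
dig +short webapp.<my_domain>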

lucendio commented 4 years ago

@ericklind In case your issue has been resolved, we'd like to ask you to close it. Otherwise, please follow up here to continue the conversation. Thank you.

mtahle commented 4 years ago

> When I try to install the server demo, I have the following error:
>
> helm upgrade --install databases-ephemeral wire/databases-ephemeral --wait
> Release "databases-ephemeral" does not exist. Installing it now.
> Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
>
> I'm very new to kubernetes, so I have no idea how to even begin troubleshooting this.
>
> Thanks.

Hello,

I faced the same error; it happens because this chart still uses the older K8s API version. I searched the internet and fixed it with the following steps:

1. Fetch the chart: helm fetch --untar wire/databases-ephemeral
2. Open ./databases-ephemeral/charts/redis-ephemeral/charts/redis/templates/deployment.yaml in a text editor and replace apiVersion: extensions/v1beta1 with apiVersion: apps/v1
3. In the same file, add spec.selector.matchLabels:

spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "redis.fullname" . }}
  [...]

4. Install from the patched chart: helm install databases-ephemeral ./databases-ephemeral --wait

Reference: stackoverflow
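
The same workaround as one copy-paste sequence (a sketch; the sed only rewrites the apiVersion line, the matchLabels block from step 3 still has to be added by hand):

helm fetch --untar wire/databases-ephemeral
# GNU sed; on macOS use: sed -i ''
sed -i 's|extensions/v1beta1|apps/v1|' \
    databases-ephemeral/charts/redis-ephemeral/charts/redis/templates/deployment.yaml
# edit deployment.yaml to add spec.selector.matchLabels as shown above, then:
helm install databases-ephemeral ./databases-ephemeral --wait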

lucendio commented 4 years ago

Thank you for your effort @mtahle. We track this internally and will give updates here accordingly.