portworx / terraform-eksblueprints-portworx-addon

Github repo for Portworx EKS blueprint addon module and sample templates
Apache License 2.0

Error installing Portworx on AWS EKS 1.24 #14

Open neeldview opened 1 year ago

neeldview commented 1 year ago

BUG REPORT

What happened:

Tried installing Portworx on Amazon Elastic Kubernetes Service (EKS) using EKS Blueprints, as described in this blog.

Everything worked up to the step that deploys the EKS Blueprints add-ons:

terraform apply -target="module.eks_blueprints_kubernetes_addons"

I changed the Kubernetes version to 1.24 in main.tf to deploy Portworx on EKS 1.24, but the portworx-pvc-controller pod keeps crashing.
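For context, the change in question is the cluster version attribute in main.tf, roughly as sketched below (the module source and attribute names follow the EKS Blueprints examples; the cluster name is a placeholder):

```hcl
module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints"

  cluster_name    = "portworx-demo" # placeholder name
  cluster_version = "1.24"          # bumped from the version shipped in the template
  # ... VPC, subnet, and managed node group settings unchanged ...
}
```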


Attached is the error output from one of the pods:

➜  ~ kc logs portworx-pvc-controller-854b7cf76c-5jgvp
I0216 09:55:29.096653       1 serving.go:348] Generated self-signed cert in-memory
W0216 09:55:29.096802       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0216 09:55:29.586201       1 controllermanager.go:180] Version: v1.24.8
I0216 09:55:29.586224       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 09:55:29.590544       1 secure_serving.go:210] Serving securely on [::]:10257
W0216 09:55:29.590569       1 controllermanager.go:678] --use-service-account-credentials was specified without providing a --service-account-private-key-file
F0216 09:55:29.590818       1 controllermanager.go:704] error creating lock: configmaps lock is removed, migrate to configmapsleases
goroutine 123 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
    vendor/k8s.io/klog/v2/klog.go:860 +0x8a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x724e480, 0x3, 0x0, 0xc0001247e0, 0x1, {0x5cf3515?, 0x1?}, 0xc000074400?, 0x0)
    vendor/k8s.io/klog/v2/klog.go:825 +0x686
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printfDepth(0x724e480, 0x565110af?, 0x0, {0x0, 0x0}, 0x45edc32?, {0x45ed547, 0x17}, {0xc00005d510, 0x1, ...})
    vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(...)
    vendor/k8s.io/klog/v2/klog.go:612
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatalf(...)
    vendor/k8s.io/klog/v2/klog.go:1516
k8s.io/kubernetes/cmd/kube-controller-manager/app.leaderElectAndRun(0xc00000e760, {0xc000405770, 0x4f}, 0xc0002d2108, {0x7ffd565110af, 0xa}, {0x45edc32, 0x17}, {0xc00041d2e0, 0x47f3438, ...})
    cmd/kube-controller-manager/app/controllermanager.go:704 +0x2d3
created by k8s.io/kubernetes/cmd/kube-controller-manager/app.Run
    cmd/kube-controller-manager/app/controllermanager.go:269 +0xc1b

goroutine 1 [select (no cases)]:
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run(0xc00000e760, 0xc0000a4300)
    cmd/kube-controller-manager/app/controllermanager.go:314 +0xc2d
k8s.io/kubernetes/cmd/kube-controller-manager/app.NewControllerManagerCommand.func2(0xc000380280?, {0xc000aebf80?, 0x0?, 0x4?})
    cmd/kube-controller-manager/app/controllermanager.go:137 +0x271
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000380280, {0xc00004c0b0, 0x4, 0x4})
    vendor/github.com/spf13/cobra/command.go:860 +0x663
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000380280)
    vendor/github.com/spf13/cobra/command.go:974 +0x3b4
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.run(0xc000380280)
    vendor/k8s.io/component-base/cli/run.go:146 +0x305
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.Run(0xc0000021a0?)
    vendor/k8s.io/component-base/cli/run.go:46 +0x1d
main.main()
    cmd/kube-controller-manager/controller-manager.go:36 +0x1e

goroutine 87 [select]:
k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000296e00)
    vendor/go.opencensus.io/stats/view/worker.go:276 +0xad
created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
    vendor/go.opencensus.io/stats/view/worker.go:34 +0x8d

goroutine 120 [runnable]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig.func5()
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:143
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:143 +0x8aa

goroutine 117 [select]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*flushDaemon).run.func1()
    vendor/k8s.io/klog/v2/klog.go:1045 +0x11e
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*flushDaemon).run
    vendor/k8s.io/klog/v2/klog.go:1041 +0x178

goroutine 112 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc000aead40)
    vendor/k8s.io/apimachinery/pkg/watch/mux.go:247 +0x57
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewLongQueueBroadcaster
    vendor/k8s.io/apimachinery/pkg/watch/mux.go:89 +0x116

goroutine 129 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1()
    vendor/k8s.io/client-go/tools/record/event.go:304 +0x73
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
    vendor/k8s.io/client-go/tools/record/event.go:302 +0x8c

goroutine 130 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1()
    vendor/k8s.io/client-go/tools/record/event.go:304 +0x73
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
    vendor/k8s.io/client-go/tools/record/event.go:302 +0x8c

goroutine 131 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0006bc660)
    vendor/k8s.io/client-go/util/workqueue/queue.go:271 +0xa7
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
    vendor/k8s.io/client-go/util/workqueue/queue.go:63 +0x1aa

goroutine 132 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006bc7e0)
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:233 +0x305
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:70 +0x24f

goroutine 133 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0006bc840)
    vendor/k8s.io/client-go/util/workqueue/queue.go:271 +0xa7
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
    vendor/k8s.io/client-go/util/workqueue/queue.go:63 +0x1aa

goroutine 134 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006bca20)
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:233 +0x305
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:70 +0x24f

goroutine 135 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0006bcae0)
    vendor/k8s.io/client-go/util/workqueue/queue.go:271 +0xa7
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
    vendor/k8s.io/client-go/util/workqueue/queue.go:63 +0x1aa

goroutine 136 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006bcc60)
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:233 +0x305
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:70 +0x24f

goroutine 145 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0006c79e0)
    vendor/k8s.io/client-go/util/workqueue/queue.go:271 +0xa7
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
    vendor/k8s.io/client-go/util/workqueue/queue.go:63 +0x1aa

goroutine 146 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006c7b60)
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:233 +0x305
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
    vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:70 +0x24f

goroutine 140 [IO wait]:
internal/poll.runtime_pollWait(0x7f0566f35a38, 0x72)
    /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0009bd200?, 0xc000bba000?, 0x0)
    /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
    /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0009bd200, {0xc000bba000, 0x1cfb, 0x1cfb})
    /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0009bd200, {0xc000bba000?, 0xc0005d3ee0?, 0xc000bba005?})
    /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc000436040, {0xc000bba000?, 0x7f05673c9fff?, 0x7f056722a600?})
    /usr/local/go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc0003c6270, {0xc000bba000?, 0x0?, 0x0?})
    /usr/local/go/src/crypto/tls/conn.go:785 +0x3d
bytes.(*Buffer).ReadFrom(0xc000a2e278, {0x4cfc660, 0xc0003c6270})
    /usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000a2e000, {0x4d04b80?, 0xc000436040}, 0x1cfb?)
    /usr/local/go/src/crypto/tls/conn.go:807 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc000a2e000, 0x0)
    /usr/local/go/src/crypto/tls/conn.go:614 +0x116
crypto/tls.(*Conn).readRecord(...)
    /usr/local/go/src/crypto/tls/conn.go:582
crypto/tls.(*Conn).Read(0xc000a2e000, {0xc000bb1000, 0x1000, 0x91a680?})
    /usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
bufio.(*Reader).Read(0xc0006c6f60, {0xc00027a740, 0x9, 0x937002?})
    /usr/local/go/src/bufio/bufio.go:236 +0x1b4
io.ReadAtLeast({0x4cfc480, 0xc0006c6f60}, {0xc00027a740, 0x9, 0x9}, 0x9)
    /usr/local/go/src/io/io.go:331 +0x9a
io.ReadFull(...)
    /usr/local/go/src/io/io.go:350
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc00027a740?, 0x9?, 0xc00184fa40?}, {0x4cfc480?, 0xc0006c6f60?})
    vendor/golang.org/x/net/http2/frame.go:237 +0x6e
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00027a700)
    vendor/golang.org/x/net/http2/frame.go:498 +0x95
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000ba7f98)
    vendor/golang.org/x/net/http2/transport.go:2101 +0x130
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000b11800)
    vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
    vendor/golang.org/x/net/http2/transport.go:725 +0xa65

goroutine 147 [select]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig.func1()
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:100 +0x71
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:99 +0x41e

goroutine 119 [runnable]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig.func2()
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:114
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:114 +0x5b5

goroutine 121 [runnable]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func1()
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:232
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:232 +0xea

goroutine 122 [runnable]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2()
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:240
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
    vendor/k8s.io/apiserver/pkg/server/secure_serving.go:240 +0x18a
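The actionable line in all of that output is the fatal one: "error creating lock: configmaps lock is removed, migrate to configmapsleases". Kubernetes 1.24 removed support for the configmaps (and endpoints) leader-election resource locks, so a controller that still starts with a flag along the lines of the one below fails immediately on 1.24 and needs the configmapsleases (or leases) lock type instead, which is what the operator-version fix in the next comment addresses:

```
--leader-elect-resource-lock=configmaps   # no longer accepted on 1.24
```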
pragrawal-px commented 1 year ago

This happens because the operator version currently on the master branch is 1.9.0, which doesn't support Kubernetes 1.24. The newer operator version 1.10.0 supports it, and there is already a PR in review with the latest operator and PX-Enterprise versions; we are trying to get it merged soon.

Until then, you can edit the operator deployment and change the version to 1.10.0 or 1.10.3. This will automatically take care of the issue and redeploy the correct PVC controllers.
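As a sketch, assuming the operator runs as a Deployment named portworx-operator in the kube-system namespace (verify the actual name and namespace with kubectl get deploy -A first), the edit amounts to bumping the image tag:

```yaml
# kubectl -n kube-system edit deployment portworx-operator
# then change the operator container's image tag:
spec:
  template:
    spec:
      containers:
        - name: portworx-operator
          image: portworx/px-operator:1.10.3 # previously 1.9.0
```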

For reference, if you want to set the operator version through the Blueprint script, here is an example:

enable_portworx = true

portworx_helm_config = {
  set = [
    {
      name  = "pxOperatorImageVersion"
      value = "1.9.0"
    },
    {
      name  = "imageVersion"
      value = "2.11.0"
    }
  ]
}
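Combining this with the advice above, a configuration for EKS 1.24 would look like the following; the operator tag comes from this thread, while the PX-Enterprise imageVersion here is a placeholder you should check against the Portworx compatibility matrix:

```hcl
enable_portworx = true

portworx_helm_config = {
  set = [
    {
      name  = "pxOperatorImageVersion"
      value = "1.10.3" # operator >= 1.10.0 supports Kubernetes 1.24
    },
    {
      name  = "imageVersion"
      value = "2.12.0" # placeholder tag; verify before use
    }
  ]
}
```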
neeldview commented 1 year ago

OK, thanks. I used version 1.22 and it installed, but please push this change soon, since Kubernetes 1.26 is out now. I would also like to report a typo in the installation doc.

The command to create the IAM policy should be changed to

terraform apply -target="aws_iam_policy.portworx_eksblueprint_volumeAccess"

from

terraform apply -target="aws_iam_policy.portworx_eksblueprint_volume_access"

since in the GitHub repo the Terraform resource is actually named "portworx_eksblueprint_volumeAccess".
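If you are unsure which spelling your checkout uses, you can extract the exact resource name before running apply, since Terraform -target addresses are case-sensitive and must match the declaration exactly. The snippet below demonstrates this against a stand-in main.tf (the temp path and file content are illustrative):

```shell
# Create a stand-in main.tf with the resource as declared in the repo
mkdir -p /tmp/px-addon-demo
cat > /tmp/px-addon-demo/main.tf <<'EOF'
resource "aws_iam_policy" "portworx_eksblueprint_volumeAccess" {}
EOF

# Extract the exact (case-sensitive) resource name to use with -target
grep -oh 'portworx_eksblueprint_[A-Za-z_]*' /tmp/px-addon-demo/main.tf
# -> portworx_eksblueprint_volumeAccess
```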