Open neeldview opened 1 year ago
This happens because the Operator version currently on the master branch is 1.9.0, which doesn't support K8s 1.24. The newer operator version 1.10.0 supports it, and there is already a PR in review with the latest operator and PX-Enterprise versions; we are trying to get it merged soon.
Until then, you can edit the operator deployment and change the version to 1.10.0 or 1.10.3. This will automatically take care of the issue and redeploy the correct PVC controllers.
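Concretely, the edit amounts to bumping the operator image tag in the deployment spec. A minimal sketch of the relevant fragment, assuming the operator runs as a deployment named `portworx-operator` in `kube-system` with a container of the same name (verify the actual names in your cluster first):

```yaml
# kubectl -n kube-system edit deployment portworx-operator
# then change the operator container's image tag, e.g.:
spec:
  template:
    spec:
      containers:
        - name: portworx-operator            # container name is an assumption
          image: portworx/px-operator:1.10.3 # was 1.9.0
```

Once saved, the operator rolls out with the new version and recreates the PVC controller pods.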
Also, for reference, if you want to set the operator version through the Blueprint script, here is an example:
```hcl
enable_portworx = true

portworx_helm_config = {
  set = [
    {
      name  = "pxOperatorImageVersion"
      value = "1.9.0"
    },
    {
      name  = "imageVersion"
      value = "2.11.0"
    }
  ]
}
```
Ok, thanks. I've used version 1.22 and it's installed, but please push this change soon, as K8s 1.26 is out now. I'd also like to report a typo in the installation doc.
The command to install the IAM policy should be changed to

```shell
terraform apply -target="aws_iam_policy.portworx_eksblueprint_volumeAccess"
```

from

```shell
terraform apply -target="aws_iam_policy.portworx_eksblueprint_volume_access"
```

since on GitHub the Terraform resource that creates it is actually named "portworx_eksblueprint_volumeAccess".
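One way to confirm the exact resource name before running the apply is to search the blueprint's Terraform files from the repo checkout (a sketch; the working directory is assumed to be where the blueprint `.tf` files live):

```shell
# List every aws_iam_policy resource and its declared name, so the
# -target value passed to `terraform apply` matches exactly.
grep -rn --include='*.tf' 'resource "aws_iam_policy"' . || echo "no .tf files found here"
```

The second quoted string on each matching line is the name to use after the `aws_iam_policy.` prefix in `-target`.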
BUG REPORT
What happened:
Tried installing Portworx on Elastic Kubernetes Service using EKS Blueprints, as described in this blog.
It was successful until the step to deploy the EKS Blueprints add-ons:

```shell
terraform apply -target="module.eks_blueprints_kubernetes_addons"
```

I changed the version to 1.24 in main.tf to deploy Portworx on AWS EKS, but the portworx-pvc-controller pod is crashing.
Attached is the error from one of the pods.
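For anyone collecting the same error, a sketch of pulling logs from the crashing controller pods; the `kube-system` namespace and the `portworx-pvc-controller` name pattern are assumptions, so adjust them to your cluster:

```shell
# Dump the previous (crashed) container logs from each pvc-controller pod.
ns=kube-system
pods=$(kubectl -n "$ns" get pods -o name 2>/dev/null | grep portworx-pvc-controller || true)
for pod in $pods; do
  echo "=== $pod ==="
  kubectl -n "$ns" logs "$pod" --previous
done
```

`--previous` is what surfaces the crash output when the pod is in CrashLoopBackOff, since the current container may have just restarted.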