macevil opened 3 years ago
We have created an issue in Pivotal Tracker to manage this:
https://www.pivotaltracker.com/story/show/175707579
The labels on this github issue will be updated when the story is started.
Okay, as far as I can see it's a problem with the network policies. After we (@nrekretep) changed them a bit, all pods start. We'll investigate further.
Hello @macevil, your issue seems suspiciously similar to this istio issue. I am curious to know what changes you've made to the network policies that made the pods start? Thanks
cc: @astrieanna
Hello @kauana and @astrieanna, we believe the standard network policy set prevents necessary communication. We deleted network policies until all pods started.
Now we have reinstalled cf and are investigating the iptables rules for violations.
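For anyone trying to reproduce this, the policies can be inspected and removed with kubectl. A rough sketch of what we did (the `cf-db` namespace is just one example; cf-for-k8s creates policies in several namespaces):

```shell
# List every NetworkPolicy the installation created, across all namespaces
kubectl get networkpolicies --all-namespaces

# Delete the policies in one namespace at a time (cf-db shown here)
# until the pending pods come up. Not recommended for production.
kubectl delete networkpolicies --all -n cf-db
```

Deleting policies is only a diagnostic step to confirm that the policies are what blocks traffic; the real fix is identifying which specific rule drops the required connection.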
Hey @macevil , thanks for submitting this. We're wondering what's different about your environment that's causing this problem. We're currently testing across GKE, AKS, EKS, minikube, and kind and not seeing this issue in any of those environments.
We suspect that, in addition to `use_first_party_jwt_tokens: true`, you may also want to try `enable_automount_service_account_token: true`.
Could you please try that out and let us know if you could use additional help from us?
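For anyone else following along, both flags go into the cf-for-k8s data values file. A sketch (the file name `cf-values.yml` and top-level placement are assumptions; the exact location may differ between cf-for-k8s versions):

```yaml
# cf-values.yml (excerpt) -- flags suggested above
use_first_party_jwt_tokens: true
enable_automount_service_account_token: true
```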
Good morning @jamespollard8, I additionally set the flag `enable_automount_service_account_token: true`, but the problem remains the same: the application only starts when I delete the network policies.
Hi @macevil,
Thanks for giving that a try! With that, we're out of ideas for next troubleshooting steps short of actually digging into kubespray ourselves. Given the prioritization of IaaS-managed clusters, we don't expect to be able to prioritize investigating kubespray further. In the meantime, we're leaving this issue open.
We do want to note that while deleting the network policies may get the installation to complete for now, we recommend keeping the network policies enabled in any production environment.
Thanks, Andrew and @jamespollard8
Do you have any updates on this issue? I tried on an Azure VM and experienced the same issue:
```
6:56:42AM: L ok: waiting on replicaset/cf-blobstore-minio-65cc549448 (apps/v1) namespace: cf-blobstore
6:56:42AM: L ongoing: waiting on pod/cf-blobstore-minio-65cc549448-shqn9 (v1) namespace: cf-blobstore
6:56:42AM:     ^ Condition Ready is not True (False)
6:56:49AM: fail: reconcile deployment/kpack-controller (apps/v1) namespace: kpack
6:56:49AM:  ^ Deployment is not progressing: ProgressDeadlineExceeded (message: ReplicaSet "kpack-controller-6c555dc47f" has timed out progressing.)

kapp: Error: waiting on reconcile deployment/kpack-controller (apps/v1) namespace: kpack: Finished unsuccessfully (Deployment is not progressing: ProgressDeadlineExceeded (message: ReplicaSet "kpack-controller-6c555dc47f" has timed out progressing.))
```
For me the solution was `kind create cluster --config=./deploy/kind/cluster.yml --image kindest/node:v1.20.2` instead of `kind create cluster --config=./deploy/kind/cluster.yml --image kindest/node:v1.21.1`.
That is, node image v1.20.2 is working.
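If someone wants to retry with the known-good image, the full sequence looks roughly like this (assumes a cf-for-k8s checkout and the default kind cluster name):

```shell
# Remove any existing cluster, then recreate it pinned to the v1.20.2 node image
kind delete cluster
kind create cluster --config=./deploy/kind/cluster.yml --image kindest/node:v1.20.2

# Confirm the API server reports v1.20.x before retrying the cf-for-k8s install
kubectl version
```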
**Describe the bug**
The 3 containers (cf-blobstore-minio, cf-db-postgresql, log-cache) remain stuck in a loop during installation; they hang due to an istio error. The error is the same on the main branch and the istio 1.7.4 branch. Shown here for cf-db-postgresql-0:
**To Reproduce**
Steps to reproduce the behavior:
Install cf-for-k8s as described here: https://github.com/cloudfoundry/cf-for-k8s/blob/develop/docs/getting-started-tutorial.md

**Expected behavior**
A running cf-for-k8s installation.
**Additional context**
```
kubectl describe pod cf-db-postgresql-0 -n cf-db
kubectl logs cf-db-postgresql-0 istio-proxy -n cf-db
```
**Deploy instructions**

**Cluster information**
kubespray 2.14
**CLI versions**
```
$ ytt --version
0.30.0
$ kapp --version
0.34.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ cf version
7.1.0+4c3168f9a.2020-09-09
```