Closed ticruz38 closed 6 years ago
Not sure about that. Maybe you want to open an issue in portworx/px-dev instead.
The thing is that it worked on the first try. It couldn't create a volume, though, since I hadn't attached anything at /dev/vdb. It seems like when running a new container, Kubernetes tries to look up the old one via the termination-log files but can't find it. Kubernetes says all pods are removed.
terraform destroy => terraform apply fixed everything. It seems that on the first deployment the volumes at /dev/vdb were not attached. Thanks for the great work!
I tried to deploy Portworx, but forgot to change the variable PX_STORAGE_SERVICE, which was set to /dev/vdb instead of /dev/vda. So I manually deleted the DaemonSet and the StorageClass, and redeployed using
kubectl apply -f storage
. However, it seems that something is broken, as I end up with 3 pods in CrashLoopBackOff state. Here is the log from one of the pods:

container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/kubelet/pods/aad88e78-09c2-11e8-8395-de2b44396007/containers/portworx-storage-1/ccd4ada6\\\" to rootfs \\\"/var/lib/docker/overlay2/04e4c34c1d0f67a09e9f3b4feef3cadd6e4b1e8e016fae48762b250798769803/merged\\\" at \\\"/var/lib/docker/overlay2/04e4c34c1d0f67a09e9f3b4feef3cadd6e4b1e8e016fae48762b250798769803/merged/dev/termination-log\\\" caused \\\"no such file or directory\\\"\""
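For anyone hitting the same thing: a manual delete of the DaemonSet can leave pods (and their stale termination-log mounts) behind, so it may help to delete the whole manifest directory and wait for the pods to terminate before reapplying. A minimal sketch, assuming the resources live in the `storage` directory and the pods carry a `name=portworx` label in the `kube-system` namespace (both are assumptions — adjust to your manifests):

```shell
# Delete everything defined in the manifest directory (DaemonSet, StorageClass, etc.).
kubectl delete -f storage

# Wait until the Portworx pods are actually gone, not just Terminating,
# so no stale container state is reused on redeploy.
# (label selector "name=portworx" is an assumption; check your DaemonSet spec)
kubectl wait --for=delete pod -l name=portworx -n kube-system --timeout=120s

# Recreate the resources with the corrected variables.
kubectl apply -f storage
```

This is only a sketch of the cleanup sequence; in this case the root cause turned out to be the unattached /dev/vdb volumes, which a terraform destroy / terraform apply fixed.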