cheslijones opened this issue 4 years ago
@eox-dev It sounds like you have some slow-starting pods. You can increase the status-check deadline using `statusCheckDeadlineSeconds` in your `skaffold.yaml`.
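For example, a minimal sketch of where that field lives (the manifest paths are taken from the trace below; adjust to your project):

```yaml
# skaffold.yaml - minimal sketch; manifest paths assumed from the trace below
apiVersion: skaffold/v2beta7
kind: Config
deploy:
  # allow up to 10 minutes for deployments to stabilize
  # (the failure below shows a 2m0s deadline being hit)
  statusCheckDeadlineSeconds: 600
  kubectl:
    manifests:
      - manifests/dev/ingress.yaml
      - manifests/dev/admin-new.yaml
```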
That warning message comes from a new codepath for forwarding services (#4590). The code falls back to using `kubectl port-forward svc/admin-new-cluster-ip-service-dev`, which was the previous behaviour. You can set the environment variable `SKAFFOLD_DISABLE_SERVICE_FORWARDING=1` to disable this new service-mapping behaviour.
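For instance, as a one-off invocation (you could equally export the variable in your shell profile; flags taken from the issue report):

```sh
# run with the new service-forwarding codepath disabled
SKAFFOLD_DISABLE_SERVICE_FORWARDING=1 skaffold dev --port-forward
```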
Could you please try running `skaffold dev -v trace` and report the results here?
@briandealwis Thanks, I will adjust the `statusCheckDeadlineSeconds` to see if any amount will permit enough time for the pods to start.

Here is the `skaffold dev -v trace` output:
```
$ skaffold dev -v trace
INFO[0000] starting gRPC server on port 50051
INFO[0000] starting gRPC HTTP server on port 50052
INFO[0000] Skaffold &{Version:v1.14.0 ConfigVersion:skaffold/v2beta7 GitVersion: GitCommit:4b3ca59af505c4ab5d5b6960a194e1f6887018f8 GitTreeState:dirty BuildDate:2020-09-03T01:30:28Z GoVersion:go1.15 Compiler:gc Platform:darwin/amd64}
DEBU[0000] Update check not enabled, skipping.
INFO[0000] Loaded Skaffold defaults from "/Users/eox-dev/.skaffold/config"
DEBU[0000] config version "skaffold/v2beta1" out of date: upgrading to latest "skaffold/v2beta7"
DEBU[0000] could not parse date ""
TRAC[0000] validating yamltags of struct SkaffoldConfig
TRAC[0000] validating yamltags of struct Metadata
TRAC[0000] validating yamltags of struct Pipeline
TRAC[0000] validating yamltags of struct BuildConfig
TRAC[0000] validating yamltags of struct Artifact
TRAC[0000] validating yamltags of struct Sync
TRAC[0000] validating yamltags of struct SyncRule
TRAC[0000] validating yamltags of struct SyncRule
TRAC[0000] validating yamltags of struct SyncRule
TRAC[0000] validating yamltags of struct SyncRule
TRAC[0000] validating yamltags of struct SyncRule
TRAC[0000] validating yamltags of struct SyncRule
TRAC[0000] validating yamltags of struct ArtifactType
TRAC[0000] validating yamltags of struct DockerArtifact
TRAC[0000] validating yamltags of struct TagPolicy
TRAC[0000] validating yamltags of struct GitTagger
TRAC[0000] validating yamltags of struct BuildType
TRAC[0000] validating yamltags of struct LocalBuild
TRAC[0000] validating yamltags of struct DeployConfig
TRAC[0000] validating yamltags of struct DeployType
TRAC[0000] validating yamltags of struct KubectlDeploy
TRAC[0000] validating yamltags of struct KubectlFlags
TRAC[0000] validating yamltags of struct LogsConfig
INFO[0000] Using kubectl context: minikube
DEBU[0000] Using builder: local
DEBU[0000] Running command: [/usr/local/bin/minikube docker-env --shell none -p minikube]
DEBU[0000] Command output: [DOCKER_TLS_VERIFY=1
DOCKER_HOST=tcp://192.168.64.5:2376
DOCKER_CERT_PATH=/Users/eox-dev/.minikube/certs
MINIKUBE_ACTIVE_DOCKERD=minikube
]
DEBU[0000] setting Docker user agent to skaffold-v1.14.0
Listing files to watch...
- companyappacr.azurecr.io/company-app-admin-new
TRAC[0000] Checking base image node:13-alpine for ONBUILD triggers.
DEBU[0000] Found dependencies for dockerfile: [{package.json /app true} {. /app true}]
DEBU[0000] Skipping excluded path: node_modules
INFO[0000] List generated in 5.088259ms
Generating tags...
- companyappacr.azurecr.io/company-app-admin-new -> DEBU[0000] Running command: [git describe --tags --always]
DEBU[0000] Command output: [221a3ff
]
DEBU[0000] Running command: [git status . --porcelain]
DEBU[0001] Command output: [?? admin-new/.dockerignore
?? admin-new/Dockerfile
?? admin-new/Dockerfile.dev
?? admin-new/nginx/
?? admin-new/package-lock.json
?? admin-new/package.json
?? admin-new/public/
?? admin-new/src/
]
companyappacr.azurecr.io/company-app-admin-new:221a3ff-dirty
INFO[0001] Tags generated in 46.303831ms
Checking cache...
TRAC[0001] Checking base image node:13-alpine for ONBUILD triggers.
DEBU[0001] Found dependencies for dockerfile: [{package.json /app true} {. /app true}]
DEBU[0001] Skipping excluded path: node_modules
- companyappacr.azurecr.io/company-app-admin-new: Found Locally
INFO[0001] Cache check complete in 14.683552ms
Tags used in deployment:
- companyappacr.azurecr.io/company-app-admin-new -> companyappacr.azurecr.io/company-app-admin-new:4301a773218d4615aaa5a9a659d4503d07d0cc10d27bfd0bd0e61107adf1ea55
DEBU[0001] Local images can't be referenced by digest.
They are tagged and referenced by a unique, local only, tag instead.
See https://skaffold.dev/docs/pipeline-stages/taggers/#how-tagging-works
DEBU[0001] getting client config for kubeContext: ``
Starting deploy...
DEBU[0001] Running command: [kubectl version --client -ojson]
DEBU[0001] Command output: [{
"clientVersion": {
"major": "1",
"minor": "16+",
"gitVersion": "v1.16.6-beta.0",
"gitCommit": "e7f962ba86f4ce7033828210ca3556393c377bcc",
"gitTreeState": "clean",
"buildDate": "2020-01-15T08:26:26Z",
"goVersion": "go1.13.5",
"compiler": "gc",
"platform": "darwin/amd64"
}
}
]
DEBU[0001] Running command: [kubectl --context minikube create --dry-run -oyaml -f /Users/eox-dev/Projects/current/company/company-app/manifests/dev/ingress.yaml -f /Users/eox-dev/Projects/current/company/company-app/manifests/dev/admin-new.yaml]
DEBU[0001] Command output: [apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
name: ingress-service-dev
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: admin-new-cluster-ip-service-dev
servicePort: 4001
path: /admin/?(.*)
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: admin-new-deployment-dev
namespace: default
spec:
replicas: 1
selector:
matchLabels:
component: admin-new
environment: development
template:
metadata:
labels:
component: admin-new
environment: development
spec:
containers:
- env:
- name: PGUSER
valueFrom:
secretKeyRef:
key: PGUSER
name: company-app-dev-secrets
- name: PGHOST
value: postgres-cluster-ip-service-dev
- name: PGPORT
value: "1423"
- name: PGDATABASE
valueFrom:
secretKeyRef:
key: PGDATABASE
name: company-app-dev-secrets
- name: PGPASSWORD
valueFrom:
secretKeyRef:
key: PGPASSWORD
name: company-app-dev-secrets
- name: SECRET_KEY
valueFrom:
secretKeyRef:
key: SECRET_KEY
name: company-app-dev-secrets
- name: SENDGRID_API_KEY
valueFrom:
secretKeyRef:
key: SENDGRID_API_KEY
name: company-app-dev-secrets
- name: DOMAIN
valueFrom:
secretKeyRef:
key: DOMAIN
name: company-app-dev-secrets
- name: DEBUG
valueFrom:
secretKeyRef:
key: DEBUG
name: company-app-dev-secrets
image: companyappacr.azurecr.io/company-app-admin-new
name: admin-new
ports:
- containerPort: 4001
volumeMounts:
- mountPath: /mnt/company-files/client-submissions
name: file-storage-dev
subPath: client-submissions
- mountPath: /mnt/company-files/client-downloads
name: file-storage-dev
subPath: client-downloads
- mountPath: /mnt/company-files/client-logos
name: file-storage-dev
subPath: client-logos
- mountPath: /mnt/company-files/xls-exports
name: file-storage-dev
subPath: xls-exports
- mountPath: /mnt/company-files/xls-imports
name: file-storage-dev
subPath: xls-imports
volumes:
- name: file-storage-dev
persistentVolumeClaim:
claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
name: admin-new-cluster-ip-service-dev
namespace: default
spec:
ports:
- port: 4001
targetPort: 4001
selector:
component: admin-new
environment: development
type: ClusterIP
]
DEBU[0001] manifests with tagged images: apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
name: ingress-service-dev
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: admin-new-cluster-ip-service-dev
servicePort: 4001
path: /admin/?(.*)
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: admin-new-deployment-dev
namespace: default
spec:
replicas: 1
selector:
matchLabels:
component: admin-new
environment: development
template:
metadata:
labels:
component: admin-new
environment: development
spec:
containers:
- env:
- name: PGUSER
valueFrom:
secretKeyRef:
key: PGUSER
name: company-app-dev-secrets
- name: PGHOST
value: postgres-cluster-ip-service-dev
- name: PGPORT
value: "1423"
- name: PGDATABASE
valueFrom:
secretKeyRef:
key: PGDATABASE
name: company-app-dev-secrets
- name: PGPASSWORD
valueFrom:
secretKeyRef:
key: PGPASSWORD
name: company-app-dev-secrets
- name: SECRET_KEY
valueFrom:
secretKeyRef:
key: SECRET_KEY
name: company-app-dev-secrets
- name: SENDGRID_API_KEY
valueFrom:
secretKeyRef:
key: SENDGRID_API_KEY
name: company-app-dev-secrets
- name: DOMAIN
valueFrom:
secretKeyRef:
key: DOMAIN
name: company-app-dev-secrets
- name: DEBUG
valueFrom:
secretKeyRef:
key: DEBUG
name: company-app-dev-secrets
image: companyappacr.azurecr.io/company-app-admin-new:4301a773218d4615aaa5a9a659d4503d07d0cc10d27bfd0bd0e61107adf1ea55
name: admin-new
ports:
- containerPort: 4001
volumeMounts:
- mountPath: /mnt/company-files/client-submissions
name: file-storage-dev
subPath: client-submissions
- mountPath: /mnt/company-files/client-downloads
name: file-storage-dev
subPath: client-downloads
- mountPath: /mnt/company-files/client-logos
name: file-storage-dev
subPath: client-logos
- mountPath: /mnt/company-files/xls-exports
name: file-storage-dev
subPath: xls-exports
- mountPath: /mnt/company-files/xls-imports
name: file-storage-dev
subPath: xls-imports
volumes:
- name: file-storage-dev
persistentVolumeClaim:
claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
name: admin-new-cluster-ip-service-dev
namespace: default
spec:
ports:
- port: 4001
targetPort: 4001
selector:
component: admin-new
environment: development
type: ClusterIP
DEBU[0001] manifests with labels apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
labels:
app.kubernetes.io/managed-by: skaffold
skaffold.dev/run-id: a389317a-7e19-408f-947b-47aeb112326c
name: ingress-service-dev
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: admin-new-cluster-ip-service-dev
servicePort: 4001
path: /admin/?(.*)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/managed-by: skaffold
skaffold.dev/run-id: a389317a-7e19-408f-947b-47aeb112326c
name: admin-new-deployment-dev
namespace: default
spec:
replicas: 1
selector:
matchLabels:
component: admin-new
environment: development
template:
metadata:
labels:
app.kubernetes.io/managed-by: skaffold
component: admin-new
environment: development
skaffold.dev/run-id: a389317a-7e19-408f-947b-47aeb112326c
spec:
containers:
- env:
- name: PGUSER
valueFrom:
secretKeyRef:
key: PGUSER
name: company-app-dev-secrets
- name: PGHOST
value: postgres-cluster-ip-service-dev
- name: PGPORT
value: "1423"
- name: PGDATABASE
valueFrom:
secretKeyRef:
key: PGDATABASE
name: company-app-dev-secrets
- name: PGPASSWORD
valueFrom:
secretKeyRef:
key: PGPASSWORD
name: company-app-dev-secrets
- name: SECRET_KEY
valueFrom:
secretKeyRef:
key: SECRET_KEY
name: company-app-dev-secrets
- name: SENDGRID_API_KEY
valueFrom:
secretKeyRef:
key: SENDGRID_API_KEY
name: company-app-dev-secrets
- name: DOMAIN
valueFrom:
secretKeyRef:
key: DOMAIN
name: company-app-dev-secrets
- name: DEBUG
valueFrom:
secretKeyRef:
key: DEBUG
name: company-app-dev-secrets
image: companyappacr.azurecr.io/company-app-admin-new:4301a773218d4615aaa5a9a659d4503d07d0cc10d27bfd0bd0e61107adf1ea55
name: admin-new
ports:
- containerPort: 4001
volumeMounts:
- mountPath: /mnt/company-files/client-submissions
name: file-storage-dev
subPath: client-submissions
- mountPath: /mnt/company-files/client-downloads
name: file-storage-dev
subPath: client-downloads
- mountPath: /mnt/company-files/client-logos
name: file-storage-dev
subPath: client-logos
- mountPath: /mnt/company-files/xls-exports
name: file-storage-dev
subPath: xls-exports
- mountPath: /mnt/company-files/xls-imports
name: file-storage-dev
subPath: xls-imports
volumes:
- name: file-storage-dev
persistentVolumeClaim:
claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/managed-by: skaffold
skaffold.dev/run-id: a389317a-7e19-408f-947b-47aeb112326c
name: admin-new-cluster-ip-service-dev
namespace: default
spec:
ports:
- port: 4001
targetPort: 4001
selector:
component: admin-new
environment: development
type: ClusterIP
DEBU[0000] Running command: [kubectl --context minikube get -f - --ignore-not-found -ojson]
DEBU[0000] Command output: []
DEBU[0000] 3 manifests to deploy. 3 are updated or new
DEBU[0000] Running command: [kubectl --context minikube apply -f -]
- ingress.networking.k8s.io/ingress-service-dev created
- deployment.apps/admin-new-deployment-dev created
- service/admin-new-cluster-ip-service-dev created
INFO[0001] Deploy complete in 484.626776ms
Waiting for deployments to stabilize...
DEBU[0001] getting client config for kubeContext: ``
DEBU[0001] checking status deployment/admin-new-deployment-dev
DEBU[0002] Running command: [kubectl --context minikube rollout status deployment admin-new-deployment-dev --namespace default --watch=false]
DEBU[0002] Command output: [Waiting for deployment spec update to be observed...
]
DEBU[0003] Running command: [kubectl --context minikube rollout status deployment admin-new-deployment-dev --namespace default --watch=false]
DEBU[0003] Command output: [Waiting for deployment spec update to be observed...
]
DEBU[0004] Running command: [kubectl --context minikube rollout status deployment admin-new-deployment-dev --namespace default --watch=false]
DEBU[0004] Command output: [Waiting for deployment spec update to be observed...
]
DEBU[0005] Running command: [kubectl --context minikube rollout status deployment admin-new-deployment-dev --namespace default --watch=false]
DEBU[0005] Command output: [Waiting for deployment spec update to be observed...
]
- deployment/admin-new-deployment-dev: waiting for deployment spec update to be observed...
[... the same rollout-status poll repeats roughly once per second, each time returning "Waiting for deployment spec update to be observed...", through DEBU[0120] ...]
DEBU[0121] Running command: [kubectl --context minikube rollout status deployment admin-new-deployment-dev --namespace default --watch=false]
DEBU[0121] Command output: [Waiting for deployment spec update to be observed...
]
- deployment/admin-new-deployment-dev: could not stabilize within 2m0s
- deployment/admin-new-deployment-dev failed. Error: could not stabilize within 2m0s.
Cleaning up...
DEBU[0122] Running command: [kubectl --context minikube create --dry-run -oyaml -f /Users/eox-dev/Projects/current/company/company-app/manifests/dev/ingress.yaml -f /Users/eox-dev/Projects/current/company/company-app/manifests/dev/admin-new.yaml]
DEBU[0122] Command output: [... identical to the manifest output from the initial dry-run above ...]
DEBU[0122] Running command: [kubectl --context minikube delete --ignore-not-found=true -f -]
- ingress.networking.k8s.io "ingress-service-dev" deleted
- deployment.apps "admin-new-deployment-dev" deleted
- service "admin-new-cluster-ip-service-dev" deleted
INFO[0122] Cleanup complete in 251.708689ms
exiting dev mode because first deploy failed: 1/1 deployment(s) failed
```
@eox-dev that `Waiting for deployment spec update to be observed...` message is really odd. Could you try running `kubectl get all --all-namespaces` so we can see what resources you have hanging around? And then when running `skaffold dev`, while waiting on the deployment, could you try `kubectl describe deployment.apps/admin-new-deployment-dev` and see what it reports?
From some searching, your issue sounds an awful lot like https://github.com/kubernetes/kubernetes/issues/36117, which seems odd considering you're running on minikube. In that issue, one cause is having too many ReplicaSets around that aren't being cleaned up.
Here are a few things to try:

- `kubectl delete` them, or set `revisionHistoryLimit: 5` on your deployment and then try re-deploying (see the sketch below).
- `minikube stop && minikube start`

If you haven't already used it, `k9s` is a nifty tool for examining your cluster.
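For the `revisionHistoryLimit` suggestion, the field goes on the Deployment spec. A minimal sketch against the `admin-new` manifest from the trace above, showing only the fields relevant here:

```yaml
# manifests/dev/admin-new.yaml - sketch; only the relevant fields reproduced
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-new-deployment-dev
  namespace: default
spec:
  revisionHistoryLimit: 5  # keep at most 5 old ReplicaSets (Kubernetes default is 10)
  replicas: 1
  selector:
    matchLabels:
      component: admin-new
      environment: development
  # template: unchanged from the manifest shown in the trace above
```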
@briandealwis Here is the `kubectl get all --all-namespaces` output without my cluster running:

```
$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5644d7b6d9-4sh2x 1/1 Running 3 5d23h
kube-system pod/etcd-minikube 1/1 Running 3 5d23h
kube-system pod/ingress-nginx-admission-create-p26nz 0/1 Completed 0 5d23h
kube-system pod/ingress-nginx-admission-patch-z64r6 0/1 Completed 0 5d23h
kube-system pod/ingress-nginx-controller-744d74fc9c-mbms6 1/1 Running 4 5d23h
kube-system pod/kube-apiserver-minikube 1/1 Running 3 5d23h
kube-system pod/kube-controller-manager-minikube 0/1 CrashLoopBackOff 111 5d23h
kube-system pod/kube-proxy-tqdxl 1/1 Running 3 5d23h
kube-system pod/kube-scheduler-minikube 0/1 CrashLoopBackOff 110 5d23h
kube-system pod/storage-provisioner 1/1 Running 7 5d23h
kubernetes-dashboard pod/dashboard-metrics-scraper-b68468655-ksnhp 1/1 Running 3 5d23h
kubernetes-dashboard pod/kubernetes-dashboard-7b65b89587-qn625 1/1 Running 3 5d23h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d23h
kube-system service/ingress-nginx-controller-admission ClusterIP 10.105.5.233 <none> 443/TCP 5d23h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d23h
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.105.181.6 <none> 8000/TCP 5d23h
kubernetes-dashboard service/kubernetes-dashboard ClusterIP 10.97.159.201 <none> 80/TCP 5d23h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/os=linux 5d23h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 1/1 1 1 5d23h
kube-system deployment.apps/ingress-nginx-controller 1/1 1 1 5d23h
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 5d23h
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 5d23h
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5644d7b6d9 1 1 1 5d23h
kube-system replicaset.apps/ingress-nginx-controller-744d74fc9c 1 1 1 5d23h
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-b68468655 1 1 1 5d23h
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-7b65b89587 1 1 1 5d23h
NAMESPACE NAME COMPLETIONS DURATION AGE
kube-system job.batch/ingress-nginx-admission-create 1/1 6s 5d23h
kube-system job.batch/ingress-nginx-admission-patch 1/1 8s 5d23h
```
When it is booting up, it hangs at this for almost two minutes:

```
$ kubectl describe deployment.apps/admin-new-deployment-dev
Name: admin-new-deployment-dev
Namespace: default
CreationTimestamp: Wed, 30 Sep 2020 09:36:28 -0700
Labels: app.kubernetes.io/managed-by=skaffold
skaffold.dev/run-id=3324dec8-4bdc-4261-8bdb-dae166d0d0c6
Annotations:
Selector: component=admin-new,environment=development
Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/managed-by=skaffold
component=admin-new
environment=development
skaffold.dev/run-id=3324dec8-4bdc-4261-8bdb-dae166d0d0c6
Containers:
admin-new:
Image: companyappacr.azurecr.io/company-app-admin-new:9afdcf3a92411f603eaf5886fb73e7354e0160489511e851cc304c20d3233a28
Port: 4001/TCP
Host Port: 0/TCP
Environment:
PGUSER: <set to the key 'PGUSER' in secret 'company-app-dev-secrets'> Optional: false
PGHOST: postgres-cluster-ip-service-dev
PGPORT: 1423
PGDATABASE: <set to the key 'PGDATABASE' in secret 'company-app-dev-secrets'> Optional: false
PGPASSWORD: <set to the key 'PGPASSWORD' in secret 'company-app-dev-secrets'> Optional: false
SECRET_KEY: <set to the key 'SECRET_KEY' in secret 'company-app-dev-secrets'> Optional: false
SENDGRID_API_KEY: <set to the key 'SENDGRID_API_KEY' in secret 'company-app-dev-secrets'> Optional: false
DOMAIN: <set to the key 'DOMAIN' in secret 'company-app-dev-secrets'> Optional: false
DEBUG: <set to the key 'DEBUG' in secret 'company-app-dev-secrets'> Optional: false
Mounts:
/mnt/company-files/client-downloads from file-storage-dev (rw,path="client-downloads")
/mnt/company-files/client-logos from file-storage-dev (rw,path="client-logos")
/mnt/company-files/client-submissions from file-storage-dev (rw,path="client-submissions")
/mnt/company-files/xls-exports from file-storage-dev (rw,path="xls-exports")
/mnt/company-files/xls-imports from file-storage-dev (rw,path="xls-imports")
Volumes:
file-storage-dev:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: file-storage-dev
ReadOnly: false
OldReplicaSets: <none>
NewReplicaSet: <none>
Events: <none>
```
Then for the last 10 seconds or so it changes to this:

```
$ kubectl describe deployment.apps/admin-new-deployment-dev
Name: admin-new-deployment-dev
Namespace: default
CreationTimestamp: Wed, 30 Sep 2020 09:36:28 -0700
Labels: app.kubernetes.io/managed-by=skaffold
skaffold.dev/run-id=3324dec8-4bdc-4261-8bdb-dae166d0d0c6
Annotations: deployment.kubernetes.io/revision: 1
Selector: component=admin-new,environment=development
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/managed-by=skaffold
component=admin-new
environment=development
skaffold.dev/run-id=3324dec8-4bdc-4261-8bdb-dae166d0d0c6
Containers:
admin-new:
Image: companyappacr.azurecr.io/company-app-admin-new:9afdcf3a92411f603eaf5886fb73e7354e0160489511e851cc304c20d3233a28
Port: 4001/TCP
Host Port: 0/TCP
Environment:
PGUSER: <set to the key 'PGUSER' in secret 'company-app-dev-secrets'> Optional: false
PGHOST: postgres-cluster-ip-service-dev
PGPORT: 1423
PGDATABASE: <set to the key 'PGDATABASE' in secret 'company-app-dev-secrets'> Optional: false
PGPASSWORD: <set to the key 'PGPASSWORD' in secret 'company-app-dev-secrets'> Optional: false
SECRET_KEY: <set to the key 'SECRET_KEY' in secret 'company-app-dev-secrets'> Optional: false
SENDGRID_API_KEY: <set to the key 'SENDGRID_API_KEY' in secret 'company-app-dev-secrets'> Optional: false
DOMAIN: <set to the key 'DOMAIN' in secret 'company-app-dev-secrets'> Optional: false
DEBUG: <set to the key 'DEBUG' in secret 'company-app-dev-secrets'> Optional: false
Mounts:
/mnt/company-files/client-downloads from file-storage-dev (rw,path="client-downloads")
/mnt/company-files/client-logos from file-storage-dev (rw,path="client-logos")
/mnt/company-files/client-submissions from file-storage-dev (rw,path="client-submissions")
/mnt/company-files/xls-exports from file-storage-dev (rw,path="xls-exports")
/mnt/company-files/xls-imports from file-storage-dev (rw,path="xls-imports")
Volumes:
file-storage-dev:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: file-storage-dev
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: admin-new-deployment-dev-5fb9f6b8cd (1/1 replicas created)
NewReplicaSet: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 28s deployment-controller Scaled up replica set admin-new-deployment-dev-5fb9f6b8cd to 1
```
Then it fails.
I've been keeping an eye on `minikube dashboard`, and it doesn't look like an issue with old ReplicaSets hanging around, as they are usually destroyed relatively quickly when I stop the cluster.
I have tried `minikube delete` and recreating it a few times, but the issue persists. I'll give it another shot though. The only part of my PV that needs to persist is the DB files, but it is just a dev copy that takes a few minutes to load up from the `Dockerfile.dev` for the initial deployment.
I haven't used `k9s`; I'll check it out to see if it helps to resolve this issue.
Could this possibly be an issue with Docker Desktop for Mac? I ask because it is particularly bad on Mac, while it isn't much of an issue when I try on Linux.
Actually, the past several times I've managed to get a cluster to start and then hit CTRL + C, the Pods and ReplicaSets were still running 5 minutes after I stopped Skaffold. I did try `skaffold delete`, but they were still up.
Looks like they finally were destroyed about 10 minutes after stopping.
Willing to bet I'm just starting a new cluster while these are waiting to be destroyed.
Your hypothesis that there is an existing ReplicaSet sounds plausible, though your report of `kubectl get all --all-namespaces` doesn't show that.

I think you'll need to dig into the ReplicaSets and pods using `kubectl describe` and try to figure out what's causing the underlying ReplicaSet to give up. I'm really curious to know the cause.
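For example, sketched against the labels in the manifests above:

```sh
# list what's left in the default namespace
kubectl get replicasets,pods -n default

# inspect the ReplicaSet and pod behind the deployment
# (label selector taken from the manifests above)
kubectl describe replicaset -l component=admin-new,environment=development
kubectl describe pod -l component=admin-new,environment=development
```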
@eox-dev were you able to find any more information?
Expected behavior

Running `skaffold dev --port-forward` or `skaffold dev --port-forward --status-check=false` should start up the cluster.

Actual behavior

As of the last two weeks or so, I started getting `could not stabilize within 2m0s` when I needed to restart the cluster. Seems like 4 out of 5 times I would get it.

Started running with `--status-check=false` and I'm getting a bunch of:

`CTRL+C`, retry a few times, and eventually things start up normally.

Checking `minikube dashboard`, for a while it seemed like it was taking upwards of a minute for Pods, Deployments, and Services to be destroyed, but I've looked at it about a dozen times and now they are being destroyed within a few seconds of `CTRL + C`.

Information