Closed SohumB closed 2 years ago
@SohumB thanks for creating this issue! The problem is in the documentation: if you have a single container you can do:
```yaml
dev:
  app:
    labelSelector:
      app.kubernetes.io/component: app
    container: container-0
    devImage: ghcr.io/loft-sh/devspace-containers/alpine:3
    terminal:
      command: bash --norc
    ssh:
      enabled: true
```
and if you have multiple containers you need to do:
```yaml
version: v2beta1
name: app

pipelines:
  dev:
    run: |-
      create_deployments --all
      start_dev app

deployments:
  app:
    helm:
      chart:
        name: component-chart
        repo: https://charts.devspace.sh
      values:
        containers:
          - image: registry.gitlab.com/inaccessible/image

dev:
  app:
    labelSelector:
      app.kubernetes.io/component: app
    containers:
      container-0:
        devImage: ghcr.io/loft-sh/devspace-containers/alpine:3
        terminal:
          command: bash --norc
        ssh:
          enabled: true
```
Ah, I see! Thank you for the help!
**What happened?**

Under normal circumstances, if you provide a chart that renders to an image that's inaccessible, devspace will install that chart, then apply any `devImage` replacements that need to be done, then check whether the pod is ready. When a `labelSelector` is specified with a `containers:` entry (as opposed to just assuming there's a single container), the latter two steps seem to be reversed, which means devspace waits forever on the `ImagePullBackOff` errors for the inaccessible image.

**What did you expect to happen instead?**

That to not happen :)
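The step ordering described under "What happened?" can be sketched as data. Note that the step names below are made up for this sketch; they are not DevSpace's actual internal functions.

```python
# Illustrative only: the order devspace normally runs these steps in,
# versus the order observed when a labelSelector is combined with a
# containers: map. Step names are invented for this sketch.
NORMAL_ORDER = [
    "install_chart",              # helm-install the deployment
    "apply_devimage_replacement", # swap in the (pullable) devImage
    "wait_for_pod_ready",         # readiness check passes against the devImage
]

OBSERVED_ORDER = [
    "install_chart",
    "wait_for_pod_ready",         # waits on the original, inaccessible image
    "apply_devimage_replacement", # never reached: readiness never succeeds
]

# The swap of the last two steps is what makes devspace hang on
# ImagePullBackOff instead of replacing the image first.
assert OBSERVED_ORDER[1:] == list(reversed(NORMAL_ORDER[1:]))
```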
**How can we reproduce the bug? (as minimally and precisely as possible)**

My `devspace.yaml`:

If you try this `devspace.yaml` with an `imageSelector`, or with a `labelSelector` but no `containers:` map, it succeeds.

**Local Environment:**

devspace version 6.0.0-beta.4

/kind bug
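For reference, the variant mentioned above that succeeds (a `labelSelector` with no `containers:` map) would look roughly like this. This is a sketch adapted from the single-container form earlier in the thread, not a config taken from the report:

```yaml
dev:
  app:
    labelSelector:
      app.kubernetes.io/component: app
    devImage: ghcr.io/loft-sh/devspace-containers/alpine:3
    terminal:
      command: bash --norc
    ssh:
      enabled: true
```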