**What happened?**

Running `devspace dev` results in an error and stops.

**First run**

Here is the output of the initial run of `devspace dev --debug`:

**Second run (works)**

If I rerun `devspace dev --debug` a second time, it succeeds and sync works, but it still displays the error:
12:05:19 dev:my-app Waiting for pod to become ready...
12:05:19 dev:my-app Start selecting a single container with selector label selector: app=my-app
12:05:20 dev:my-app Selected pod my-app-d5f477dc9-llnqd
12:05:20 dev:my-app sync Start selecting a single container with selector pod name: my-app-d5f477dc9-llnqd
12:05:20 dev:my-app sync Starting sync...
12:05:21 dev:my-app sync Start syncing
12:05:21 dev:my-app sync Sync started on: ./my_app <-> /my-app/my_app
12:05:21 dev:my-app sync Waiting for initial sync to complete
12:05:21 dev:my-app sync Initial Sync - Retrieve Initial State
12:05:21 dev:my-app sync Downstream - Start collecting changes
12:05:21 dev:my-app sync Helper - Use inotify as watching method in container
12:05:21 dev:my-app sync Downstream - Done collecting changes
12:05:21 dev:my-app sync Initial Sync - Done Retrieving Initial State
12:05:21 dev:my-app sync Initial Sync - Calculate Delta from Remote State
12:05:21 dev:my-app sync Initial Sync - Done Calculating Delta (Download: 0, Upload: 0)
12:05:21 dev:my-app sync Downstream - Initial sync completed
12:05:21 dev:my-app sync Upstream - Initial sync completed
12:05:21 dev:my-app sync Initial sync took: 622.089548ms
12:05:21 dev:my-app sync Initial sync completed
12:05:21 debug Wait for dev to finish
12:05:22 dev:my-app sync Sync Error on /home/vpatov/mycompany/my-app/my_app: upstream: exec after initial sync: rpc error: code = Unknown desc = Error executing command 'touch /.devspace/start': touch: cannot touch '/.devspace/start': No such file or directory
=> exit status 1
12:05:22 dev:my-app sync Sync stopped
12:05:22 dev:my-app sync Restarting because: upstream: exec after initial sync: rpc error: code = Unknown desc = Error executing command 'touch /.devspace/start': touch: cannot touch '/.devspace/start': No such file or directory
=> exit status 1
12:05:22 dev:my-app sync Start selecting a single container with selector pod name: my-app-d5f477dc9-llnqd
12:05:22 dev:my-app sync Helper - Streams are closed
12:05:22 dev:my-app sync Helper - Streams are closed
12:05:22 dev:my-app sync Starting sync...
12:05:23 dev:my-app sync Start syncing
12:05:23 dev:my-app sync Sync started on: ./my_app <-> /my-app/my_app
12:05:23 dev:my-app sync Waiting for initial sync to complete
12:05:23 dev:my-app sync Initial Sync - Retrieve Initial State
12:05:23 dev:my-app sync Downstream - Start collecting changes
12:05:23 dev:my-app sync Helper - Use inotify as watching method in container
12:05:23 dev:my-app sync Downstream - Done collecting changes
12:05:23 dev:my-app sync Initial Sync - Done Retrieving Initial State
12:05:23 dev:my-app sync Initial Sync - Calculate Delta from Remote State
12:05:23 dev:my-app sync Initial Sync - Done Calculating Delta (Download: 0, Upload: 0)
12:05:23 dev:my-app sync Downstream - Initial sync completed
12:05:23 dev:my-app sync Upstream - Initial sync completed
12:05:23 dev:my-app sync Initial sync took: 637.195352ms
12:05:23 dev:my-app sync Initial sync completed
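For what it's worth, the failing command suggests the `/.devspace` marker directory is missing in the container on the first run. This is how I would check that by hand (a diagnostic sketch; the pod name is the one from the log above):

```sh
# Does the marker directory the sync helper writes to exist?
kubectl -n my-app exec my-app-d5f477dc9-llnqd -c my-app -- ls -la /.devspace

# Re-run the exact command that fails during sync.
kubectl -n my-app exec my-app-d5f477dc9-llnqd -c my-app -- touch /.devspace/start
```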
**What did you expect to happen instead?**

I expect `devspace dev` to work the first time around.

**How can we reproduce the bug? (as minimally and precisely as possible)**

My devspace.yaml:
version: v2beta1
name: my-app

# metallb lets us access the cluster's services from our local machine
dependencies:
  metallb:
    path: ./hack/metallb
    namespace: metallb-system

images:
  my-app:
    image: my-app-local-image
    target: production
    buildKit:
      args:
        - "--ssh"
        - "default=${SSH_AUTH_SOCK}"

commands:
  prep:
    description: "Setup the cluster for development. This only needs to be run once per cluster."
    command: |-
      # Update helm repos
      helm repo add bitnami https://charts.bitnami.com/bitnami
      helm repo add cert-manager https://charts.jetstack.io
      helm repo update
      # Install helm dependency apps.
      helm upgrade cert-manager cert-manager/cert-manager --namespace cert-manager --create-namespace --set crds.enabled=true --install

pipelines:
  dev:
    run: |-
      run_dependencies metallb
      kubectl apply -f ./hack/my-app/namespace.yaml
      # Use dev secrets so that we don't need externalsecrets / doppler
      kubectl apply -f ./hack/my-app/dev-secrets.yaml
      build_images my-app
      create_deployments my-app
      SERVICE_IP=$(kubectl get svc my-app-kubernetes-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      # TODO this is a hack. Rather than add the metallb IP to /etc/hosts, we should use
      # cloudflare DNS. (A more defensive version of this lookup is sketched below.)
      sudo sed -i '/console\.dev\.mycompany\.com/d' /etc/hosts
      echo "${SERVICE_IP} console.dev.mycompany.com" | sudo tee -a /etc/hosts
      start_dev my-app
  deploy:
    run: |-
      # TODO figure out how to install metallb as part of devspace run prep, while still using
      # dependency syntax/structure (convenient to be able to invoke it this way).
      # It only needs to happen once per cluster, and it doesn't need to be torn down and rebuilt
      # for every cloud console deploy.
      run_dependencies metallb
      kubectl apply -f ./hack/my-app/namespace.yaml
      # Use dev secrets so that we don't need externalsecrets / doppler
      kubectl apply -f ./hack/my-app/dev-secrets.yaml
      build_images my-app
      create_deployments my-app
      SERVICE_IP=$(kubectl get svc my-app-kubernetes-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      # TODO this is a hack. Rather than add the metallb IP to /etc/hosts, we should use cloudflare DNS.
      sudo sed -i '/console\.dev\.mycompany\.com/d' /etc/hosts
      echo "${SERVICE_IP} console.dev.mycompany.com" | sudo tee -a /etc/hosts

deployments:
  my-app:
    helm:
      releaseName: my-app
      chart:
        path: ../local-iac/my-app/
      valuesFiles:
        - ../local-iac/my-app/values.yaml
        - ../local-iac/my-app/values-dev-kind.yaml
      values:
        image:
          tag: ${runtime.images.my-app.tag}
          repository: ${runtime.images.my-app.image}

dev:
  my-app:
    labelSelector:
      app: my-app
    namespace: my-app
    container: my-app
    sync:
      - path: ./my_app:/my-app/my_app
        startContainer: true
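One thing I noticed while writing this up (referenced in the TODO comment above): the `SERVICE_IP` lookup assumes metallb has already assigned the LoadBalancer IP, and the variable ends up empty if it has not. A more defensive version I am considering, as a sketch using the same service name:

```sh
# Poll until metallb assigns an external IP, for up to ~60 seconds.
SERVICE_IP=""
for _ in $(seq 1 30); do
  SERVICE_IP=$(kubectl get svc my-app-kubernetes-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  [ -n "${SERVICE_IP}" ] && break
  sleep 2
done
[ -n "${SERVICE_IP}" ] || { echo "no LoadBalancer IP assigned" >&2; exit 1; }
```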
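Also, since the `dev` and `deploy` pipelines share all of their setup steps, I assume they could be deduplicated with a custom function, if v6 `functions` can call the built-in pipeline commands (a sketch, untested):

```yaml
functions:
  # Shared setup steps used by both pipelines.
  setup_cluster: |-
    run_dependencies metallb
    kubectl apply -f ./hack/my-app/namespace.yaml
    kubectl apply -f ./hack/my-app/dev-secrets.yaml
    build_images my-app
    create_deployments my-app

pipelines:
  dev:
    run: |-
      setup_cluster
      start_dev my-app
  deploy:
    run: |-
      setup_cluster
```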
**Local Environment:**
- DevSpace Version: 6.3.14
- Operating System: Ubuntu 22.04.5 LTS
- ARCH of the OS: x86_64

**Kubernetes Cluster:**
- Cloud Provider: kind v0.25.0 go1.22.9 linux/amd64
- Docker:
  - Client: Docker Engine - Community
    - Version: 27.3.1
    - API version: 1.47
    - Go version: go1.22.7
    - Git commit: ce12230
    - Built: Fri Sep 20 11:41:00 2024
    - OS/Arch: linux/amd64
    - Context: default
  - Server: Docker Engine - Community
    - Engine:
      - Version: 27.3.1
      - API version: 1.47 (minimum version 1.24)
      - Go version: go1.22.7
      - Git commit: 41ca978
- Kubernetes Version:
  - Client Version: v1.30.1
  - Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  - Server Version: v1.29.0
**Anything else we need to know?**
We started using devspace very recently, so we are flexible with how we configure things. It's very possible I am simply misusing/misunderstanding the config. Any help would be appreciated, thank you!
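For example, if `startContainer: true` is what triggers the failing `touch /.devspace/start`, I could live without it as a workaround, assuming the container is then simply started by its normal entrypoint (a sketch of the trimmed dev section):

```yaml
dev:
  my-app:
    labelSelector:
      app: my-app
    namespace: my-app
    container: my-app
    sync:
      # Same sync path, but without `startContainer: true`, so the helper
      # never needs to create the /.devspace/start marker.
      - path: ./my_app:/my-app/my_app
```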