Open wayspurrchen opened 3 years ago
Small update: after removing the terminal config and using `dev.sync.localSubPath`, I now correctly get through to the log streaming with `devspace dev`, but I still experience the same issues with the 0/0 `front-app` Deployment and the last pod not getting deleted by `devspace purge`.
@wayspurrchen thanks for creating this issue! The `-devspace` pod you are seeing is the result of using `dev.replacePods`, which scales down a deployment / replicaset / statefulset and creates a custom pod used for development instead, mirroring the settings and labels of the original pod from the deployment / replicaset / statefulset. You can check out the replacePods docs for a more in-depth explanation.
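In `devspace.yaml`, a minimal `dev.replacePods` entry looks roughly like this (a sketch only; the selector follows this issue's `front-app` example, and the `replaceImage` value is purely illustrative):

```yaml
dev:
  replacePods:
    - labelSelector:
        app: front-app       # which pods to replace
      replaceImage: node:16  # image to run in the replacement dev pod (illustrative)
```

DevSpace then scales the owning Deployment down and starts a single `...-devspace` pod carrying the original pod's labels, which is why the replaced pod still matches the app's selectors.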
I agree that `devspace purge` should delete those pods as well; we can add that. Currently, `devspace reset pods` is required to delete them. `devspace reset pods` will also restore the original deployment if you run it without `devspace purge`. In general, you don't have to use the `dev.replacePods` option: you can also target the pods from the deployment directly via sync or port forwarding. That's up to you, but we think replacing pods simplifies certain workflows.
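For example, targeting the deployment's own pods directly via port forwarding could look like this (a minimal sketch with assumed port values; field names follow the DevSpace v5 schema):

```yaml
dev:
  ports:
    - labelSelector:
        app: front-app   # target the deployment's pods directly, no pod replacement
      forward:
        - port: 8080     # local port
          remotePort: 80 # container port the app listens on
```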
Interesting, that's good to know! I was under the impression that I had to use `replacePods` to get syncing and related capabilities, but it's great to hear that's not the case. Testing this same workflow without `replacePods` works exactly as expected. Thank you very much!
What happened?
I am having a bevy of small issues, but I'm not familiar enough with DevSpace to tell which are bugs, intended behavior, or user error. Regardless, there may be opportunities for documentation or modifications arising from my difficulties.
Context: I am attempting to use DevSpace as part of my evaluation of service meshes, starting with Istio. I have a simple application that is configured to present a few HTTP routes and hit one of two other identical applications. These are simple Express apps exposed over port 80, but only the Deployment named `front-app` is exposed to the Istio Ingress via its Gateway/VirtualService mechanism (`back-app-1` and `back-app-2` are not).

I am using `labelSelector.app` and `containerName` to specify the `front-app` Pod as well as my sample application container, also named `front-app`, since the Istio sidecar container is also present. See the service topology below:
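The selection described above corresponds roughly to this `devspace.yaml` fragment (a sketch, not the exact file from the repro repo):

```yaml
dev:
  sync:
    - labelSelector:
        app: front-app         # matches only the front-app Pod
      containerName: front-app # the app container, not the Istio sidecar
```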
The services are red because they are hardcoded to randomly throw errors for demonstration of error-tracking during tracing, so that is expected.
Running `devspace dev` shows this output:

The terminal session inside the `front-app-7c59885f69-vbwhp-devspace` pod has quit by this point, implying a restart, so I am back at my local terminal. Running `kubectl get pod` shows that `front-app` has had its pod replaced, with one restart:

`kubectl logs -f front-app-7c59885f69-vbwhp-devspace --previous` shows that the container entered a crash loop due to apparently missing application files:

This error does not show up for the other pods, `back-app-1-...` and `back-app-2-...`, and all three deployments/pods share the same Dockerfile, code, and built image residing on Google Container Registry, which is what makes me suspect this is related to DevSpace.

Aside: During this bug report I realized that I did not have `devspace.yaml`'s `dev.sync.localSubPath` option set, as my application is in a folder named `app`. I updated my config with the appropriate folder and no longer saw the restart issue, but then encountered this error upon running `devspace dev`:

I figured that since I am now in a subdirectory I should update `terminal.command` to be `../devspace_start.sh`, but I still received the same error as above, except with `../devspace_start.sh` in place of `./devspace_start.sh`. However, the pods do appear to have been deployed properly, without restarts:
But other issues experienced still persist.
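For context, the aside's config changes amount to something like the following sketch (values assumed from the description above; the `terminal` section, shown here under `dev` as in DevSpace v5, is the one whose removal avoided the error):

```yaml
dev:
  sync:
    - labelSelector:
        app: front-app
      containerName: front-app
      localSubPath: ./app   # the application code lives in ./app locally
  terminal:
    labelSelector:
      app: front-app
    command:
      - ./devspace_start.sh # removing this terminal config avoided the error
```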
Regardless, the `front-app-7c59885f69-vbwhp-devspace` pod IS running: I can connect to it with `kubectl logs -f front-app-7c59885f69-vbwhp-devspace`, and I can hit the app via the external IP in a web browser and exercise its endpoints, which show up in the logs:

Running `kubectl get deployment` shows that the `front-app` Deployment is not aware of the devspace pod:

Other commands like `devspace enter`, `devspace sync`, and `devspace ui` work like a charm:

Running `devspace purge` appears to indicate that everything worked as expected:

However, running `kubectl get pod` shows the `front-app` pod still running:

Logically, it's as inaccessible as everything else, since the Deployments, Gateways, and VirtualServices have all been cleaned up.
What did you expect to happen instead?
`devspace purge` to delete the replaced pod

How can we reproduce the bug? (as minimally and precisely as possible)
I have prepared a small repro repo here: https://github.com/wayspurrchen/devspaces-istio-repro
For convenience, this is the compiled Kubernetes output that Helm applies, generated with `helm template --debug helm/smt-app`:
:Commands
Local Environment:
Kubernetes Cluster:
Anything else we need to know?
Running `devspace sync` from my project directory appears not to use the `devspace.yaml` sync settings, resulting in all of the files in `app` getting remote-synced into my local top-level directory. Running it with a command flag or moving into the `app` directory first is easy, but it would be nice if the command automatically used the configuration file to know where to sync.

Apologies for the exhaustive detail, and thank you very much for what seems like a very useful and powerful tool once I figure out how to use it properly :)
/kind bug