noizwaves opened this issue 3 years ago
@noizwaves thanks for opening this issue and the detailed description! That's actually expected behaviour at the moment: `dev.sync.labelSelector` and `dev.sync.imageName` are combined by a logical OR rather than an AND, which means DevSpace will find all containers that fulfill either the `labelSelector` or the `imageName` requirement, order them by creation timestamp, and select the newest. Furthermore, `containerName` applies only to `labelSelector`, not to `imageName`, which in your case leads to the selection of the `hi-worker` pods as well as the `hi-web` pods. I see, however, that in order for you to use the `preferSyncOverRebuild` feature, you currently need to use `imageName`.
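Concretely, the OR-ed selection described above means a sync rule like the following sketch (names assumed for illustration, not taken from this issue) matches both workloads, because either selector alone is enough:

```yaml
# devspace.yaml (sketch; names are assumed, not from the original issue)
dev:
  sync:
    - labelSelector:
        app: hi-web        # matches the hi-web pods ...
      containerName: web   # ... and only constrains the labelSelector match
      imageName: app       # but this ALSO matches any container running the
                           # `app` image (e.g. hi-worker), since the two
                           # selectors are OR-ed, and the newest pod wins
```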
After some internal discussion, we think a good solution would be to introduce a new config option called `images.*.rebuildStrategy` that could take several values, such as `always` (which solves another issue of forcing DevSpace to always rebuild an image), `default`, and `ignoreContextChanges` (which wouldn't count Docker context changes when determining whether a rebuild should happen). In your case, you could just set `rebuildStrategy: ignoreContextChanges`, which essentially behaves exactly like `preferSyncOverRebuild`, with the distinction that you do not need to specify an `imageName` in the sync configuration. The option would also allow us to introduce even more sophisticated rebuild strategies in the future. What do you think about this approach, would that solve your issue?
Many thanks for the response @FabianKramm! Your explanation of the current behavior makes sense.
Reading over your proposal, I think it would address our issues. I can explain our motivation and the thinking behind why we were exploring `preferSyncOverRebuild`:
When preparing for a demo, we were doing many repeated cycles of "start devspace dev; change files; stop devspace dev". The files we were changing were tests and app source code. On the second repetition of this cycle, we noticed that these changes were being detected as context changes and were triggering an image rebuild. We know that these files don't affect the image contents (because they're not the `Dockerfile`, `Gemfile`, or `package.json`), so we started exploring solutions to skip the image build.
I suspect `rebuildStrategy: ignoreContextChanges` would allow us to achieve this level of control. In our current state of optimization, an initial sync of changes to a dated image is orders of magnitude faster than a cached kaniko build. Being able to defer context diffs to an initial sync would be great, at least for source code changes.
For changes to certain files (I'm thinking of `package.json` or `Gemfile`, for example) we'd like to be able to do a sync and then some action. Conceptually, it is known that when the `package.json` changes, you should run `yarn install` (this is what our engineers have to do manually right now). It would be great if we could declare somewhere in the `devspace.yaml` that "when the `package.json` is uploaded to the container, run `yarn install`".
We haven't examined the currently available options in devspace docs yet. Looking quickly over the docs for hooks and sync, this kind of "file upload specific" hook isn't available. Keen to hear your thoughts on this!
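To make the idea concrete, the kind of declaration we're imagining might look something like this (entirely hypothetical syntax, not an existing DevSpace option as far as we can tell):

```yaml
# devspace.yaml (hypothetical "file upload" hook; names assumed)
dev:
  sync:
    - labelSelector:
        app: hi-web
      onUpload:                  # hypothetical trigger
        - match: package.json    # when this file is synced into the container
          command: yarn install  # run this command inside the container
```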
@noizwaves That's good additional context. I wanted to add the following regarding your quote: Did you notice that DevSpace checks the `.dockerignore` file? If you are excluding files from the build context using `.dockerignore`, they are ignored during image building.
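For example, a `.dockerignore` along these lines (illustrative entries only; adjust to the actual project) would keep test files out of the build context, so changing them no longer counts as a context change:

```
# .dockerignore (illustrative; actual entries depend on the project)
.git
spec/
test/
*.md
node_modules
```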
In general, I still think we need rebuildStrategy because that would address other cases as well and makes the build behavior a bit more transparent (since each strategy will be documented as well).
I also think that there are two cases to consider when building an image once and then using it in multiple containers:
Thanks @LukasGentele, that all sounds good. On the `.dockerignore`, we are aware of it and will take another look at our settings.
In terms of the cases you've outlined:
**Sync should hit 1 container only**

This case makes the most sense to us.

**Sync should hit n containers**

We have not hit this requirement yet, and I'm not sure we will. We've been able to define multiple `sync` rules (with different logic) to achieve our desired sync configuration.
**What happened?**

When adding the `dev.sync.imageName` configuration to gain prefer-sync-over-rebuild behavior in an orchestration where one image is deployed to many pods (in this case, the `app` image is used to run 3 containers across 2 pods), `devspace` selects the wrong destination pod as the sync target.

The deployment manifest looks like this:
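(The manifest contents did not survive in this copy of the issue. Based on the description above, one `app` image running 3 containers across 2 pods, a minimal sketch with assumed names would be:)

```yaml
# Sketch only: the original manifest is missing from this copy of the issue.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hi-web
spec:
  replicas: 1
  selector:
    matchLabels: {app: hi-web}
  template:
    metadata:
      labels: {app: hi-web}
    spec:
      containers:
        - name: web
          image: app        # same image as the worker
        - name: assets      # assumed second container, to account for
          image: app        # "3 containers across 2 pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hi-worker
spec:
  replicas: 1
  selector:
    matchLabels: {app: hi-worker}
  template:
    metadata:
      labels: {app: hi-worker}
    spec:
      containers:
        - name: worker
          image: app        # same image, different workload
```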
Before making the changes, the contents of `devspace.yaml`:

Before making the changes, the output of `devspace dev` reveals:

After making the changes, the contents of `devspace.yaml`:

After making the changes, the output of `devspace dev` reveals syncing to `adam-neumann/hi-worker-5fb566c78-nxcp5` now instead of `adam-neumann/hi-web-656bb7f578-mvmxc`:
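(The before/after `devspace.yaml` contents and command output are also missing from this copy. Based on the description, the relevant change was adding `dev.sync.imageName`, roughly:)

```yaml
# Sketch only: the original before/after configs were not preserved.
dev:
  sync:
    # before: labelSelector alone targeted the hi-web pod
    - labelSelector:
        app: hi-web
      containerName: web
      # after: imageName added to enable prefer-sync-over-rebuild; since
      # imageName is OR-ed with labelSelector, the newer hi-worker pod
      # now wins the selection
      imageName: app
```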
**What did you expect to happen instead?**

I was expecting the destination pod selected for file syncing to remain `adam-neumann/hi-web-656bb7f578-mvmxc`.

**How can we reproduce the bug? (as minimally and precisely as possible)**

The information above should be enough to reproduce it.
**Local Environment:**

devspace version 5.9.1

**Kubernetes Cluster:**

**Anything else we need to know?**