Closed diegodelemos closed 4 years ago
I would prefer 1 over 2. However, we can tackle this problem together with another one: if we rebuild, say, REANA 0.6.0 two months after release, the produced docker images are not the same anymore, because we are using relaxed version constraints like:

```console
$ grep tablib setup.py
'tablib>=0.12.1,<0.13',
```

which can produce images once with tablib 0.12.3, once with 0.12.8, based on which version is available. This is not good for reproducibility :wink: and we are losing a lot of time hunting dependencies.
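To make the non-reproducibility concrete, here is a toy pure-Python simulation (not pip's actual resolver) of how the same range constraint `>=0.12.1,<0.13` resolves to different versions depending on what has been released by build time:

```python
# Toy illustration (not pip's real resolver): a range constraint picks
# whatever newest matching version exists at build time, so two builds
# of the same source at different dates can produce different images.

def resolve(candidates, lo=(0, 12, 1), hi=(0, 13, 0)):
    """Return the newest version v with lo <= v < hi, or None."""
    matching = [v for v in candidates if lo <= v < hi]
    return max(matching) if matching else None

# Build shortly after release: only 0.12.3 is available.
early = [(0, 12, 1), (0, 12, 3)]
# Rebuild two months later: 0.12.8 and 0.13.0 exist meanwhile.
later = early + [(0, 12, 8), (0, 13, 0)]

print(resolve(early))  # (0, 12, 3)
print(resolve(later))  # (0, 12, 8) -- same constraint, different image contents
```

The same `setup.py`, rebuilt later, silently ships a different dependency set.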
Hence the proposal: use fully pinned versions such as `tablib==0.12.7`, be it in `setup.py` or in `requirements.txt`, and use the fully pinned list to produce docker images or to release the Python client on PyPI; use `pip-compile` and pyup to have security updates. If we then update `reana-server`'s dependency on `reana-db` from `0.7.0.dev20200427` to `0.7.0.dev20200520`, we'd naturally have to edit `requirements.txt`, so its source code will change as well, and the problem described in this issue (package A changing, package B not changing) would not occur. WDYT?
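For concreteness, the pinning flow might look like this (version numbers are illustrative; `pip-compile` comes from the `pip-tools` package and can read `setup.py` as input):

```console
$ grep tablib setup.py
'tablib>=0.12.1',          # loose lower bound only, no upper cap
$ pip-compile --output-file requirements.txt setup.py
$ grep tablib requirements.txt
tablib==0.12.7             # fully pinned; commit this file to the repo
```

Rebuilding the docker image from `requirements.txt` then yields the same dependency set until `pip-compile` is deliberately re-run.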
For illustration, here is a list of things to do for a cluster package such as `reana-workflow-controller` in order to move to a pip-compile-based release model:

- introduce `pip-tools`;
- amend `setup.py`, e.g. keep lowest versions, but remove upper boundaries;
- run `pip-compile` to generate `requirements.txt` and add it to the repo; could be done by hand;
- amend `Dockerfile` so that `requirements.txt` is taken into account when building the image;
- integrate `pip-compile` into Travis CI workflows, I think;
- keep `requirements-builder` only for those packages that are user-facing and that are released on PyPI, e.g. `r-client`; all cluster packages will be "pinned" via `pip-compile` and `requirements.txt`;
- when we release a new `r-commons` or `r-db`, we have to edit each cluster component to increase the minimal version in its `setup.py` to the new wanted value; we then rerun `pip-compile` for this change -- beware that this might bring new deps! we could just as well edit only the `r-commons` and `r-db` versions, and leave a general update for later; see below -- this will lead to bulk-releasing of all cluster components; we may want to script this in `reana-dev`;
- periodically rerun `pip-compile` in all packages, in order to bring in the latest updates of, say, `tablib`, and run CI tests carefully; `pip-compile` has a notion of "packages" that can help with that, e.g. update only a REANA package and not global packages, etc.

All REANA cluster components have been moved to use the new pip-compile based freezing.
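For the `Dockerfile` step in the checklist above, a minimal sketch of installing from the pinned file could look like this (paths are illustrative, not the actual REANA Dockerfiles):

```dockerfile
# Install the exact pinned dependency set first, then the package itself,
# so that rebuilding the image months later yields the same dependencies.
COPY requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir -r /code/requirements.txt
COPY . /code
RUN pip install --no-cache-dir /code
```

Copying `requirements.txt` before the rest of the sources also lets docker cache the dependency layer across source-only changes.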
All REANA client and shared components will remain non-freezed so that users can install them on a variety of existing systems, perhaps into their existing environments.
This issue can therefore be closed.
Because of re-pushing an image with the same tag, the following problem has happened in a QA deployment (the re-push happens if one follows our current docs and our current practices). Let us illustrate it with two deployments, `DEP1` and `DEP2`:

`DEP1` (first execution of `helm install/upgrade`):

- `reana-0.7.0-dev20200420`
- `reanahub/reana-workflow-controller:0.6.0-36-gb702986` with sha256 `aaabbb`
- containing `reana-db==0.7.0.dev20200427`

`DEP2` (second execution of `helm install/upgrade`, after `DEP1`):

- `reana-0.7.0-dev20200527`
- `reanahub/reana-workflow-controller:0.6.0-36-gb702986` with sha256 `cccddd`
- containing `reana-db==0.7.0.dev20200520`
- the image tag did not change (`0.6.0-36-gb702986`), but a new version of `reana-db` was fetched while re-building and re-pushing the images, producing a new image with sha256 `cccddd`
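The mismatch is visible at the digest level; a hypothetical `docker inspect` transcript (digests abbreviated as in the example above):

```console
$ docker inspect --format '{{index .RepoDigests 0}}' \
    reanahub/reana-workflow-controller:0.6.0-36-gb702986
reanahub/reana-workflow-controller@sha256:aaabbb...   # what the node cached
$ # meanwhile the registry now serves sha256:cccddd... under the very same tag
```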
This will cause `DEP2` to not work because:

- the node already has `reanahub/reana-workflow-controller:0.6.0-36-gb702986` (sha256 `aaabbb`) present, so a re-pull will never be triggered (we use the default `ImagePullPolicy: IfNotPresent`);
- the `reanahub/reana-workflow-controller` image that will be used will be `aaabbb` instead of `cccddd`, which is the one `reana-0.7.0-dev20200527` expects, with the correct `reana-db` version.

Solutions:

- Always release new versions of the changed packages (`reana-db`, `reana-commons`, etc.), so the tag will change (in our example from `0.6.0-36-gb702986` to `0.6.0-37-gc16d3332`). Beware that as long as re-pushed tags exist, if `aaabbb` is used in production, a re-location of the pod from one node to another triggers a new docker pull, causing production to all of a sudden run `cccddd`, which might break it.
- Use `ImagePullPolicy: Always` for infrastructure pods.

Note: Even though closely related to https://github.com/reanahub/reana/issues/248 this issue is different since it deals with infrastructure pods (RS, RWC) rather than with runtime pods.
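Regarding the `ImagePullPolicy: Always` solution above, it would amount to a one-line change in the container spec; a Kubernetes sketch (container name and image taken from the example, not the actual REANA Helm templates):

```yaml
# Sketch of an infrastructure pod's container spec with forced re-pull.
containers:
  - name: reana-workflow-controller
    image: reanahub/reana-workflow-controller:0.6.0-36-gb702986
    imagePullPolicy: Always  # re-pull on every pod (re)start, so re-pushed tags are picked up
```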