`make check` aborts with these errors:

```
ocrd-anybaseocr 1.6.0 has requirement keras-preprocessing==1.1.0, but you have keras-preprocessing 1.1.2.
ocrd-anybaseocr 1.6.0 has requirement scipy==1.4.1, but you have scipy 1.6.3.
ocrd-anybaseocr 1.6.0 has requirement tensorflow<2.2.0,>=2.1.0, but you have tensorflow 2.0.4.
```
These are worrying. @SB2020-eye reported them as well in the chat, but I cannot reproduce them. Can you try it with a fresh checkout of ocrd_all? This might be an artefact of the previous versions.
`make all` passes, but the build log shows lots of these conflicts:

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
ocrd 2.24.0 requires ocrd-utils==2.24.0, but you have ocrd-utils 2.23.3 which is incompatible.
ocrd-validators 2.24.0 requires ocrd-utils==2.24.0, but you have ocrd-utils 2.23.3 which is incompatible.
ocrd-models 2.24.0 requires ocrd-utils==2.24.0, but you have ocrd-utils 2.23.3 which is incompatible.
ocrd-modelfactory 2.24.0 requires ocrd-utils==2.24.0, but you have ocrd-utils 2.23.3 which is incompatible.
```
These are expected when upgrading: core is updated during the `make all` process (from 2.23.3 to 2.24.0), and it should reach a consistent state eventually.
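To confirm that the final state really is consistent, pip's own dependency check can be run once `make all` has finished. This is a minimal sketch, assuming the venv created by the build lives at `venv/`:

```sh
# Report any remaining version conflicts among the installed packages
venv/bin/pip check

# Show which version of core (the ocrd package) actually ended up installed
venv/bin/pip show ocrd | grep Version
```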
> Can you try it with a fresh checkout of ocrd_all?
The build is running; I'll report the result here ...
22 minutes later: For a fresh clone of ocrd_all, `make all` and `make check` both pass without an error.
> For a fresh clone of ocrd_all, `make all` and `make check` both pass without an error.
that is a relief :)
But we obviously need a better way to upgrade. What I tend to recommend to users in case of conflicts is basically:

```
rm venv/bin/ocrd-*
make all
```

What I sometimes do after developing and changing packages etc. manually is to just `rm -r venv` and then reinstall, to get a clean install and the same "clean slate" as on CircleCI.

Maybe we should have a `make clean-upgrade` command that removes all installed executables, removes all sub-venvs, and reinstalls everything?
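Assuming the sub-venvs live at `venv/local/sub-venv/` (as the listings further down suggest), such a `clean-upgrade` target could boil down to roughly these shell commands; this is only a sketch, not an existing ocrd_all target:

```sh
# Hypothetical `make clean-upgrade` recipe (sketch only)
rm -f venv/bin/ocrd-*           # remove all installed executables
rm -rf venv/local/sub-venv      # remove all sub-venvs
make all                        # reinstall everything
```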
I am afraid that won't help. I forgot to mention that I started with a new virtual environment in my first test. So we might have this situation:

- `make all check` works when it is started without an activated virtual environment (=> building a new `venv`)
- `make all check` fails when it is started with an activated fresh virtual environment

I'll check both variants again.
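The two variants could be compared like this; a sketch only, with a hypothetical venv path `~/venv-fresh`, and assuming a fresh recursive clone in both cases:

```sh
# Variant 1: no virtual environment active; make creates venv/ itself
git clone --recursive https://github.com/OCR-D/ocrd_all.git
cd ocrd_all
make all check

# Variant 2: a freshly created virtual environment is already activated
python3 -m venv ~/venv-fresh
. ~/venv-fresh/bin/activate
make all check
```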
2021-05-29: My tests did not find evidence for those claims.
This issue seems to be a random one. I now have two different virtual environments (both made in a new build) for the same source tree, one working (venv-20210428t) and one failing (venv-20210428). There are differences in their Python module lists: the "bad" one has a wrong old version of `tensorflow`, and the "good" one has an old version of `ocrd` (so it is not really good either):
```diff
--- ../venv-20210428/local/sub-venv/headless-tf21/headless-tf21.list 2021-05-29 08:36:42.725256035 +0200
+++ ../venv-20210428t/local/sub-venv/headless-tf21/headless-tf21.list 2021-05-29 08:36:57.285263565 +0200
@@ -27,14 +27,14 @@
h5py 2.10.0
idna 2.10
imageio 2.9.0
-importlib-metadata 4.3.0
+importlib-metadata 4.3.1
itsdangerous 2.0.1
Jinja2 3.0.1
jsonschema 3.2.0
Keras 2.3.1
Keras-Applications 1.0.8
keras-nightly 2.5.0.dev2021032900
-Keras-Preprocessing 1.1.2
+Keras-Preprocessing 1.1.0
kiwisolver 1.3.1
lxml 4.6.3
Markdown 3.3.4
@@ -45,15 +45,15 @@
oauthlib 3.1.0
ocr4all-pixel-classifier 0.6.5
ocr4all-pylib 0.2.6
-ocrd 2.24.0
+ocrd 2.23.3
ocrd-anybaseocr 1.6.0
ocrd-fork-ocropy 1.4.0a3
ocrd-fork-pylsd 0.0.4
-ocrd-modelfactory 2.24.0
-ocrd-models 2.24.0
+ocrd-modelfactory 2.23.3
+ocrd-models 2.23.3
ocrd-pc-segmentation 0.2.3
-ocrd-utils 2.24.0
-ocrd-validators 2.24.0
+ocrd-utils 2.23.3
+ocrd-validators 2.23.3
opencv-python-headless 4.5.2.52
opt-einsum 3.3.0
pandas 1.2.4
@@ -74,15 +74,15 @@
requests-oauthlib 1.3.0
rsa 4.7.2
scikit-image 0.18.1
-scipy 1.6.3
+scipy 1.4.1
setuptools 57.0.0
Shapely 1.7.1
six 1.15.0
-tensorboard 2.0.2
+tensorboard 2.1.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.0
-tensorflow 2.0.4
-tensorflow-estimator 2.0.1
+tensorflow 2.1.3
+tensorflow-estimator 2.1.0
termcolor 1.1.0
tifffile 2021.4.8
toml 0.10.2
```
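For reference, a listing like the one above can be regenerated with `pip list` inside each sub-venv and compared with `diff`. The paths are taken from the diff header above; it is assumed here that each sub-venv directory is a regular virtualenv with its own `bin/pip`:

```sh
# Dump the package list of each sub-venv and compare them
../venv-20210428/local/sub-venv/headless-tf21/bin/pip list > bad.list
../venv-20210428t/local/sub-venv/headless-tf21/bin/pip list > good.list
diff -u bad.list good.list
```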
There are also differences in other virtual environments, but those could be caused by updates in PyPI:
```diff
--- ../venv-20210428/local/sub-venv/headless-torch14/headless-torch14.list 2021-05-29 08:47:20.813584012 +0200
+++ ../venv-20210428t/local/sub-venv/headless-torch14/headless-torch14.list 2021-05-29 08:47:12.641579834 +0200
@@ -13,7 +13,7 @@
Flask 2.0.1
idna 2.10
imageio 2.9.0
-importlib-metadata 4.3.0
+importlib-metadata 4.3.1
itsdangerous 2.0.1
Jinja2 3.0.1
jsonschema 3.2.0
-tensorboard 2.0.2
+tensorboard 2.1.1
-tensorflow 2.0.4
-tensorflow-estimator 2.0.1
+tensorflow 2.1.3
+tensorflow-estimator 2.1.0
```
Since this is supposed to be the venv with the 2.1.x tensorflow: so we had a broken release v2021-04-28, but the latest version is working correctly? In any case, I do think we need a better upgrade mechanism/documentation.
I had run the builds with the latest release and newly created virtual environments. Therefore I am afraid that it's not an upgrade issue.
> I had run the builds with the latest release and newly created virtual environments. Therefore I am afraid that it's not an upgrade issue.
OK, so `venv-20210428` is at https://github.com/OCR-D/ocrd_all/releases/tag/v2021-05-21? Then how did you derive the "good" version from the "bad" version? And how come core is behind in version in the sub-venv? How could this happen when doing a fresh install?
`venv-20210428` and `venv-20210428t` have wrong names; `venv-20210528` and `venv-20210528t` would have been correct. Both used v2021-05-21 plus an update for the tesseract submodule.
Then how did you derive the "good" version from the "bad" version? And how come core is behind in version in the sub-venv? How could this happen when doing a fresh install?
@stweil I want to reproduce this, but I need to know how.
It looks like reproduction simply requires running `make all` twice:

```
make all
make all
make check
```
I could confirm that running `make all` twice with the same virtual environment breaks `make check`:

```
make all    # passes
make check  # passes
make all    # passes
make check  # fails
```
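To narrow down what the second run changes, the package list of the venv can be snapshotted before and after it and compared; a sketch, again assuming the venv lives at `venv/`:

```sh
# After the first make all / make check:
venv/bin/pip list > after-first-run.list

make all
venv/bin/pip list > after-second-run.list

# Show which packages the second run upgraded or downgraded
diff -u after-first-run.list after-second-run.list
make check
```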
What's the status here?
I can no longer reproduce this issue with the latest version. Therefore I am closing it now.