Hi Gaby @GabyRumc ,
I am happy to provide some limited support now, but we are awaiting final publication of our manuscript before we provide ongoing full support. Thank you for pointing out some dependency issues. You are correct that there are redundant dependencies in the requirements.txt file. As you mentioned, tensorflow-gpu==2.0.0 can be used rather than the nightly preview version; tensorflow-gpu 2.0.0 was used due to CUDA limitations on our local server. Accordingly, as you discovered, Python 3.7 must be used for compatibility, and we have now indicated this clearly in the README file. We also removed the redundant and incorrect "skimage" dependency, as this is already covered by scikit-image in the requirements file.
Note the only major difference between the example requirements.txt file you provided and our updated file is the version for keras-vis. Your file contains vis>=0.0.4; however, for gradCAM visualization to work correctly you must use the latest keras-vis version by installing directly from github (git+https://github.com/raghakot/keras-vis).
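For reference, a quick way to sanity-check that an environment matches these pins (just a sketch using standard imports, not code from the repo):

    # Quick sanity check that the environment matches the pins above
    # (a sketch only; assumes keras-vis was installed from the github link).
    import sys
    import tensorflow as tf
    import vis  # keras-vis installs under the top-level package name "vis"

    assert sys.version_info[:2] == (3, 7), "tensorflow-gpu 2.0.0 wheels only exist up to Python 3.7"
    assert tf.__version__.startswith("2.0."), tf.__version__
    print("TensorFlow", tf.__version__, "and keras-vis import OK")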
No adjustments to str.decode in the source code are necessary, and the preferred file format for this application is PNG. Regarding the "IndexError: list index out of range" error in crop_img.py: this relates to normalization of the path name rather than a programming error (it is addressed in more detail in the corresponding issue you raised separately).
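To illustrate what we mean by path normalization, a hypothetical sketch (not the actual crop_img.py code):

    # Hypothetical sketch of the path-normalization point (not the actual
    # crop_img.py code): deriving the file name via os.path avoids indexing
    # assumptions that break when the separator or a trailing slash differs.
    import os

    def image_name(path):
        # normpath handles '/' vs '\\' and trailing separators before basename splits
        return os.path.basename(os.path.normpath(path))

    print(image_name("data/images/case01.png"))  # hypothetical example path -> "case01.png"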
Thanks very much and let us know if this does not resolve your issue.
Thanks!
No adjustments to str.decode in the source code are necessary
It was in the keras package - but no need if the packages are now updated.
Note the only major difference between the example requirements.txt file you provided and our updated file is the version for keras vis
A file diff says differently ;-) but ok...
version for keras vis. Your file contains vis>=0.0.4
Yes, this is the vis package - latest is 0.0.5, not keras-vis. In any case, PyPI only has v0.4.1 for keras-vis, so I will stick to pulling the unreleased version from github as you specify, even if that isn't a formally released version.
I found a workable set of packages based around tensorflow-gpu 2.2.0, so this version should also be fine, given what you said about tensorflow-gpu 2.0.0 - or not?
No adjustments to str.decode in the source code are necessary
Not in your source code, no. Using your updated requirements.txt with tensorflow(-gpu)==2.0.0, I get the following for both a Python 3.6 and a Python 3.7 environment:
Using TensorFlow backend.
Loading models...
python-BaseException
Traceback (most recent call last):
File "[...]site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 166, in load_model_from_hdf5
model_config = json.loads(model_config.decode('utf-8'))
AttributeError: 'str' object has no attribute 'decode'
The decode() method on class str was removed in Python 3 -> suggests build/package incompatibilities are still present.
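For what it's worth, the usual workaround for this class of error (newer h5py returning str where older keras code still calls .decode()) is a guard along these lines - just a sketch, not a patch against your repo:

    # Sketch of the usual str/bytes guard for this class of error (assuming the
    # cause is newer h5py returning str where older keras code expects bytes).
    def ensure_str(value):
        # Decode only if we actually received bytes; str passes through unchanged.
        return value.decode("utf-8") if isinstance(value, bytes) else value

    # e.g. model_config = json.loads(ensure_str(model_config))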
Bunch of other incompatibilities etc., won't list them all here.
Given there is no specific reason to stay with tensorflow 2.0.0, I'll stay with tensorflow(-gpu)==2.2.0 in requirements.txt, which also has the nice side-effect of being able to run on my GPU, which tf 2.0.0 cannot (my GPU drivers are not compatible that far back for tf 2.0.x, but tf 2.2.x runs fine on my GPU with NVIDIA drivers v450).
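For completeness, this is how I confirm that tf 2.2.0 actually sees the GPU (standard TensorFlow 2.x API, nothing project-specific):

    # Standard TensorFlow 2.x check that this install can actually see the GPU.
    import tensorflow as tf

    print(tf.__version__)
    print(tf.config.list_physical_devices("GPU"))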
Staying with tensorflow 2.2.0 gives the following other incompatibilities with your requirements.txt:
tensorflow-gpu 2.2.0 requires h5py<2.11.0,>=2.10.0, but you'll have h5py 3.1.0 which is incompatible.
tensorflow-gpu 2.2.0 requires scipy==1.4.1; python_version >= "3", but you'll have scipy 1.5.4 which is incompatible.
tensorflow 2.2.0 requires h5py<2.11.0,>=2.10.0, but you'll have h5py 3.1.0 which is incompatible.
tensorflow 2.2.0 requires scipy==1.4.1; python_version >= "3", but you'll have scipy 1.5.4 which is incompatible.
Please note these version constraints belong in requirements.txt. For tf 2.0 this must also be the case - and maybe even earlier versions of some packages might be required for tf 2.0.0. Neither h5py nor scipy were in your updated requirements.txt, so I re-added them (they were in my version of requirements.txt) with the version specifiers as listed above, and everything now installs correctly.
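As a quick check that the re-added pins are what actually got installed (pkg_resources ships with setuptools, so this also works on Python 3.7):

    # Print the installed versions of the packages discussed above.
    import pkg_resources

    for pkg in ("tensorflow-gpu", "h5py", "scipy"):
        print(pkg, pkg_resources.get_distribution(pkg).version)
    # Expected for tf 2.2.0 per the pip messages above: h5py 2.10.x, scipy 1.4.1.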
FYI, my current requirements.txt, which installs correctly (pip install -r requirements.txt) with tensorflow(-gpu)==2.2.0, is attached:
requirements.txt
OK, I am going to try a fresh build of the environment from scratch and report back to you. As you know, dependencies can be tricky, especially when working on different workstations with different OS/NVIDIA drivers/CUDA versions etc. Some of these issues seem to be OS/workstation independent, however (i.e. dependencies missing entirely from the requirements file). I will get back to you within the next 48 hours.
I closed this for now as I have posted my requirements.txt, which works on tf 2.2.0 for me. Feel free to re-open if you find other issues. Yep, packaging can be tricky... some of the issues I had were definitely tf / keras package compatibility issues - there seems to be a lot on the internet about such packaging problems.
Hi @GabyRumc ,
I reopened to update you on my testing of pip install of the requirements file on different operating systems with different hardware specifications. Unfortunately, there are occasional issues depending on your setup/environment which are impossible to account for with a requirements.txt file alone. To make the process as easy as possible, we have therefore included another option of pulling a docker image from a DockerHub repository, which should work. There are two versions: a "large" version, which is an exact copy of the production environment for our platform, and a "small" version, which I put together using only the required dependencies and which has been tested, though not as extensively as the large version. You can now find details on how to pull these docker images from DockerHub in the README.
Please let me know if you are still having environment issues after this.
there are occasional issues depending on your setup/environment which are impossible to account for with a requirements.txt file alone
Agreed - requirements.txt is only half the story.
we have therefore included another option of pulling a docker image from a DockerHub repository that should work
Thanks.
docker pull rwehbe/deepcovidxr:large - picked it up, will let you know how I get on.
Will close this one as I'm using the docker image now - but it needed some modifications (e.g. copying the Python source from this repo into it) to get it working. I'll open a separate issue/PR for this.
Hi,
I tried to install / run the code but had quite a few issues, detailed below.
Summary: Could you please provide a clean working version suitable for a Python 3.x environment? (Attached is a modified requirements.txt which achieves this)
So here are the issues and the steps taken to resolve them:
=> I installed tensorflow-gpu 2.0.0, which is the closest official release date-wise after the nightly dev build pinned in requirements.txt
Again:
Next Error:
=> scikit-image was already in requirements.txt, so I just removed skimage -> it seems skimage is discontinued.
Again:
Next Error:
By looking at the actual files available on PyPI for tensorflow-gpu, I eventually realized this was because I was using Python 3.8, and the latest builds available of tensorflow-gpu 2.0.0 only go up to and including Python 3.7.
So I installed Python 3.7 instead and tried again. Package deps install success! I then tried to run crop_img.py on a couple of prepared PNG images. Next error:
For Python 3.x, this is correct -> the str.decode() method was removed in Python 3.x and is only present in Python 2.x. Thus the keras version is still 2.7 (probably from the github install?), and this call is deep in keras and is used for loading the stored dictionary of model weights.
I thus tried a re-install of everything, this time with Python 2.7. First warning:
So I guess it might be a good idea to update to 3.x anyway?
Then I tried
which gave the following error:
I went back to Python 3.7 and re-installed all packages as before. Then I simply removed by hand the calls to str.decode() in keras/saving.py to see if I could get the program running, which it did, at least until the next error:
The suffix ".png" was being used, so instead of allowing im.save() to work out the format from the filename, I explicitly requested PNG format:
which gave a little more detailed information:
It seems the PNG plugin is simply not available in the current package installs for the 2.x version of keras + accompanying pillow version.
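A quick way to check whether a given Pillow install actually has PNG support registered (standard Pillow API):

    # Check whether this Pillow install has PNG support registered
    # (PNG support depends on zlib being available when Pillow was built).
    from PIL import Image, features

    print("zlib/PNG codec available:", features.check("zlib"))
    Image.init()  # force registration of all format plugins
    print("PNG save handler registered:", "PNG" in Image.SAVE)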
So I modified the code to save bitmap (".bmp") instead of PNG, which got me a little further until the next error:
At which point I concluded this is just a programming error - I will raise it in a separate issue.
One last try: I cleaned the environment and kept to the package list, but allowed a full clean install of all packages suitable for and consistent with a Python 3.7 environment. This installed fine, and it ran through with none of the dependency errors above. It still errored on IndexError: list index out of range, which I assume is a straight programming error rather than any dependency/package error.
Attached are both a modified requirements.txt and a dump of pip_freeze.txt to show the full install environment. I attached them to this issue rather than submitting a pull request, as I wasn't sure whether (1) I am able to push a branch to your repo, and (2) you would actually want me to push a branch (non-master, of course).
Regards, Gaby