So I changed
dir_models = '/home/raulia/binders_nn/modules/models/'
to
dir_models = '/people/kimd999/ML-ACD/ab/code/official/iNNterfaceDesign/models/'
and ran again.
It ran further, then I saw
I'm more familiar with PyTorch than TF, and I don't understand the error message.
I got the same error when I tried to run the model with TensorFlow 2.4 instead of 2.1. Did you check your version?
yes, 2.8
python -c 'import tensorflow as tf; print(tf.__version__)'
2022-03-24 14:13:27.357651: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /share/apps/cuda/9.2.148/lib:/share/apps/cuda/9.2.148/lib64:/share/apps/tmux/2.3/lib:/usr/lib64/:/share/apps/cuda/9.2.148/lib64/stubs:/share/apps/python/miniconda3.8/lib
2022-03-24 14:13:27.357705: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2.8.0
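For what it's worth, a minimal one-stop check of the pairing (nothing beyond the standard library and TensorFlow is assumed):
# Print the Python and TensorFlow versions together, since the pairing
# (not either version alone) is what matters in this thread.
import sys
import tensorflow as tf
print('Python:', sys.version.split()[0])
print('TensorFlow:', tf.__version__)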
My TensorFlow is 2.1.0. I failed to run it with 2.4, so I guess 2.8 is causing a problem too.
I'll rewrite the code for the new version of TF in the future, but it has not been done yet.
Yes, I use Python 3.8, and https://www.tensorflow.org/install/pip says "Python 3.8 support requires TensorFlow 2.2 or later."
Indeed.
So, I will install tf 2.1 with python 3.7. Thanks.
My Python version is 3.7.7.
I don't see 3.7.7 at https://repo.anaconda.com/miniconda/
Since I have Python 3.7.11, hopefully that will be close enough.
In a new conda environment, I installed TF 2.1.0 and pyrosetta-2022.12+release.a4d7970-cp37-cp37m-linux_x86_64.whl with Python 3.7.11.
run_this_py="../iNNterfaceDesign_scripts/1.preprocessing.py"
input_file="PepBB.input"
python $run_this_py $input_file
ran well.
However,
run_this_py="../iNNterfaceDesign_scripts/2.binders.py"
input_file="PepBB.input"
python $run_this_py $input_file
resulted in
P.S.
Various TF library messages are a little concerning. However,
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
ran well with the same/similar messages as above.
Therefore, the TF library messages are probably fine.
Try to run the attached file with its extension changed (.txt to .py) and your directory with models: 2.binders.txt
I think that the only difference was
< model_sst = load_model(dir_models + 'SecS.hdf5')
< model_bb = load_model(dir_models + 'PepBB.hdf5')
---
> model_sst = load_model(dir_models + 'SecS.hdf5', custom_objects={'tf': tf, 'K': tf.keras.backend})
> model_bb = load_model(dir_models + 'PepBB.hdf5', custom_objects={'tf': tf, 'K': tf.keras.backend})
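For context, the change above amounts to something like this self-contained sketch (dir_models here is a placeholder for your local models/ path):
# Pass custom_objects so that names like tf and K, referenced inside the
# saved Lambda layers, can be resolved when the models are deserialized.
import tensorflow as tf
from tensorflow.keras.models import load_model

dir_models = 'models/'  # placeholder; point this at your models directory
custom = {'tf': tf, 'K': tf.keras.backend}
model_sst = load_model(dir_models + 'SecS.hdf5', custom_objects=custom)
model_bb = load_model(dir_models + 'PepBB.hdf5', custom_objects=custom)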
Unfortunately, I see the identical message.
Yes, the error looks similar.
I use
import h5py
print(h5py.__version__)
->
3.6.0
Could you check the solutions about downgrading h5py from https://stackoverflow.com/questions/53740577/does-any-one-got-attributeerror-str-object-has-no-attribute-decode-whi ? Update: my h5py is v2.10.0 indeed.
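For what it's worth, that error happens because h5py 3.x returns HDF5 string attributes as str, while the TF 2.1-era Keras loader still calls .decode() on them. A minimal reproduction of just that failure mode:
# The old HDF5 model loader does roughly this to attributes it reads; on
# h5py 3.x the attribute is already a str, so .decode() raises.
attr = '2.1.0'  # h5py 3.x hands back str, not bytes
try:
    attr.decode('utf-8')
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'decode'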
ok, I did
pip uninstall h5py
pip install h5py==2.10.0
Then, I see this message:
Unfortunately, this kind of reproducibility difficulty may keep repeating, as I think you can agree.
I assume that a specification such as
python v3.7 or higher;
PyRosetta;
Tensorflow v2.1.0.
is not enough.
Can you list all of your dependencies like I did at https://github.com/pnnl/AutoMicroED/blob/master/requirements.txt ?
Then, all I need to do is just
pip install -r <user path>/requirements.txt
which will automatically downgrade dependencies if needed.
This is a standard way of distributing a new program (of course, I have used Docker as well, but most of the time this simple pip install of a requirements file is a win-win for most people).
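For example, based only on the versions that have come up in this thread so far, a pinned requirements.txt sketch might start like this (the full list would have to come from your environment, e.g. via pip freeze):
tensorflow==2.1.0
h5py==2.10.0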
I installed your script, and before trying it I ran 2.binders.py again. And I got your error exactly:
NotImplementedError: Cannot convert a symbolic Tensor (lstm/strided_slice:0) to a numpy array.
Your script changed my numpy from 1.19.1 to 1.20.1; that's it. The error is a numpy version error now. Update: I solved the error by downgrading numpy back to 1.19.1.
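A minimal guard that codifies the observation above (TF 2.1's LSTM path breaks on numpy >= 1.20):
# Fail fast if numpy is too new for the TF 2.1 LSTM code path, which raises
# 'NotImplementedError: Cannot convert a symbolic Tensor ... to a numpy array'
# on numpy >= 1.20.
import numpy as np
major, minor = (int(x) for x in np.__version__.split('.')[:2])
assert (major, minor) < (1, 20), f'numpy {np.__version__} is too new; pin numpy==1.19.1'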
Yes, I did
pip install NumPy==1.19.1
Thank you.
Now I see:
The PepBB model is an archive divided into three pieces because of file size restrictions here on GitHub. My guess is that you did not unzip it properly or did not replace PepBB.zip with the final unzipped PepBB.hdf5.
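If it helps, a hypothetical reassembly sketch, assuming the three pieces are plain byte-splits of one zip; the piece names below are placeholders, not the repo's actual file names:
# Hypothetical: concatenate the pieces back into PepBB.zip, then extract
# PepBB.hdf5 and place it in your models directory.
import shutil, zipfile
parts = ['PepBB.zip.001', 'PepBB.zip.002', 'PepBB.zip.003']  # placeholder names
with open('PepBB.zip', 'wb') as out:
    for part in parts:
        with open(part, 'rb') as f:
            shutil.copyfileobj(f, out)
with zipfile.ZipFile('PepBB.zip') as z:
    z.extractall('.')  # should yield PepBB.hdf5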
That was correct. I had extracted PepBB only on my Mac, since I couldn't extract it on my Linux machine, as I wrote.
Now, after I copied PepBB.hdf5 onto my Linux machine, I see this:
You can comment out the line about helix.txt. It is not required with default options. I'll add the file within a couple of minutes though. Update: the file is added into iNNterfaceDesign_scripts/modules/
With
commit 6d084d38eb3fa7965444cdc041128591cd7a4c96
Author: Raulia Syrlybaeva <Raulia.Syrlybaeva@uga.edu>
Date: Thu Mar 24 22:43:01 2022 -0400
Update transform_coords.py
I see
The Rosetta community used to have a unit/integration test server. I assume that they still use it.
You can take a look at https://github.com/features/actions
Thanks, I'll look into it. I attached the file with an extra extension for now: helix.pdb.txt. I think that the problem is with the path 'modules/'.
Yes, can you email me (doonam.kim@pnnl.gov) once you fix the hard-coded path issue (like this 'modules/' one)?
Eva or another lab member could test it for you.
I think that this is the last path that should be fixed. I'll check everything soon on another computer. Anyway, if you decide to stop here, I completely understand, because the code was not tested by anyone other than me, and it seems it has numerous issues, which is exhausting. I appreciate your effort and am grateful for the feedback. I'll let you know when I finish tests in another place and fix the paths.
With the latest repo, I could now run 2.binders.py to the point where I see 4 PDB files at run_iNNterfaceDesign/3ztj_ABCDEF/binders.
So, with the 3ztj_ABCDEF folder that we generated with 1.preprocessing.py,
I ran
Then, I see
Looking at 2.binders.py,
dir_models = '/home/raulia/binders_nn/modules/models/'
is hardcoded. Adding one more command-line argument to the Python script would be an easy way to avoid this hardcoded variable... :)
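Something like this minimal sketch would do it (the argument names are illustrative, not the script's actual interface):
# Illustrative only: accept the models directory as an optional command-line
# argument instead of hardcoding it.
import argparse
parser = argparse.ArgumentParser(description='2.binders.py (sketch)')
parser.add_argument('input_file', help='e.g. PepBB.input')
parser.add_argument('--dir_models', default='modules/models/',
                    help='folder containing SecS.hdf5, PepBB.hdf5, ...')
args = parser.parse_args()
dir_models = args.dir_models  # replaces the hardcoded path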