Dear Sebastian,
The "run_decimer_save_results.py" is outdated and we are not using that anymore. That script was written for our internal use.
But if you wish to use it, please install DECIMER using PyPi. After that use the now updated "run_decimer_save_results.py" and it should work.
pip install decimer==2.2.0
from DECIMER import predict_SMILES
# Chemical depiction to SMILES translation
image_path = "path/to/imagefile"
SMILES = predict_SMILES(image_path)
print(SMILES)
Thank you very much, that fixed it! You also made me realize that I should simply build a benchmark.py using the decimer package. Thanks!
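For anyone who wants to do the same, here is a minimal sketch of such a benchmark.py. Only predict_SMILES comes from the DECIMER package; the tab-separated reference file (image path and ground-truth SMILES per line) and the exact-match count are illustrative assumptions, not part of the original script.

# benchmark.py -- minimal sketch; the input format and metric are assumptions,
# only predict_SMILES is part of the DECIMER package.
import csv
import sys

from DECIMER import predict_SMILES

def main(reference_tsv: str) -> None:
    total = correct = 0
    # Assumed input format: one "image_path<TAB>ground_truth_SMILES" pair per line.
    with open(reference_tsv, newline="") as handle:
        for image_path, truth in csv.reader(handle, delimiter="\t"):
            prediction = predict_SMILES(image_path)
            total += 1
            correct += prediction == truth  # naive exact-string comparison
            print(f"{image_path}\t{truth}\t{prediction}")
    print(f"Exact matches: {correct}/{total}", file=sys.stderr)

if __name__ == "__main__":
    main(sys.argv[1])

Note that comparing raw SMILES strings is only a rough proxy, since the same molecule can have many valid SMILES; for a real benchmark you would probably canonicalize both sides first, e.g. with RDKit's Chem.MolToSmiles(Chem.MolFromSmiles(s)).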
Hi everyone,
I am currently trying to run the "run_decimer_save_results.py" script to reproduce the benchmarks. While doing so, I am encountering import errors in several files.
An example:
File "/data/anaconda3/envs/DECIMER/lib/python3.10/site-packages/DECIMER/Predictor_EfficientNet2.py", line 8, in <module>
    import Efficient_Net_encoder
ModuleNotFoundError: No module named 'Efficient_Net_encoder'
"import DECIMER.Efficient_Net_encoder" would fix it but this doen't work for example in the Efficient_Net_encoder.py file, if I put "DECIMER.efficientnetv2" for simply "efficinetnetv2" the import is functioning but later in the same script the function "fficientnetv2.effnetv2_model.EffNetV2Model(model_name=model_name)" is prompting an error that the module efficientnetv2 doesn' have a module effnetv2_model.
Also a question: in the Predictor_EfficientNet2.py script, a tokenizer "tokenizer_Isomeric_SELFIES" and a maximum length "max_length_Isomeric_SELFIES" are loaded. Are those just renamed versions of the two objects that are downloaded with the model, called "tokenizer_SMILES" and "max_length" respectively, or are they different objects?
I encountered the same issue with two installations, one on a Windows machine and one on a Unix machine. I followed the README instructions, created a conda environment, and ran the code inside that environment.
I hope someone can help me. Let me know if you need any other information.
Cheers, Sebastian