Closed DhanshreeA closed 1 year ago
@miquelduranfrigola please take a look.
The SA (synthetic accessibility) score is on a scale of 1 to 10 (lower is better) while the SC (synthetic complexity) score is on a scale of 1 to 5 (again, lower is better). I am unsure how SCscore 2 compares to SAscore 2.
Hi @DhanshreeA
SC score uses the number of reactions needed to synthesize a new molecule as a measure of complexity, whereas SA score also takes into account the presence of unusual structural features. They are related (usually, a molecule with a high SC score will also have a high SA score and vice versa).
Do you think it would be useful to incorporate the models for SA and SC independently as well? SA score is already in the Hub (eos9ei3) but SC score I think is not there: https://pubs.acs.org/doi/10.1021/acs.jcim.7b00622
@GemmaTuron I see. Thanks for that explanation, it makes a lot of sense. Incorporating SAscore 2 and SCscore 2 independently is going to be very straightforward since the author has provided different fragment replacement databases for each of them. The only difference is that SA2 is 3 GB while SC2 is 721 MB, but I understand that Git LFS handles that.
Hi @DhanshreeA !
I am unsure about what SA2 is and how it differs from the SA score in the original publication, which is already in the Hub. Unless there is a significant improvement, we are good with the one we have.
Hi @GemmaTuron, I'll quote from their documentation:
replacements02_sa2.db.gz – database created from ChEMBL v22 structures which contain only organic atoms (C, N, O, S, P, F, Cl, Br, I) and have a maximum synthetic accessibility score (SAScore) of 2
replacements02_sc2.db.gz – database created from ChEMBL v22 structures which contain only organic atoms (C, N, O, S, P, F, Cl, Br, I) and have a maximum synthetic complexity score (SCScore) of 2
The model we have in the Hub computes the SA score for a given input molecule, while this framework uses fragment databases filtered by this score to generate new molecules. In fact, the author only uses the SA score as defined in the original publication https://jcheminf.biomedcentral.com/articles/10.1186/1758-2946-1-8
A few more details about this package relevant to incorporating it within the hub. The library API provides three functions for generating mutations in a given input molecule:
grow_mol
which replaces hydrogen atoms of the input molecule with fragments from the database.
Input:
mol: RDKit Mol object
Returns:
Generator over SMILES of new molecules
link_mols
which links two molecules together via fragments from the database.
Input:
mol1: RDKit Mol object
mol2: RDKit Mol object
Returns:
Generator over SMILES of new molecules
mutate_mol
which replaces fragments of the input molecule with interchangeable fragments from the database.
Input:
mol: RDKit Mol object
Returns:
Generator over SMILES of new molecules
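As a minimal sketch of how one of these endpoints might be called (the helper name and default values below are our own, not part of the library; it assumes the crem package is installed and a replacement database has been downloaded):

```python
def generate_analogs(smiles, db_path, max_replacements=100):
    """Enumerate analogues of a molecule with CReM's mutate_mol.

    Imports are done lazily so this sketch can be read without RDKit or
    crem installed; both are required to actually run it.
    """
    from rdkit import Chem
    from crem.crem import mutate_mol

    mol = Chem.MolFromSmiles(smiles)
    # mutate_mol yields SMILES strings; max_replacements caps how many
    # randomly chosen replacements are applied
    return list(mutate_mol(mol, db_name=db_path,
                           max_replacements=max_replacements))
```

For example, something like `generate_analogs("c1ccccc1O", "replacements02_sc2.db")` would yield up to 100 analogues of phenol.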
Other configurable parameters that can affect the speed of execution include, from the documentation and as addressed in the original repo:
max_replacements – maximum number of replacements to make. If the number of replacements available in the DB is greater than the specified value, the specified number of randomly chosen replacements will be applied. Default: None.
min_freq – minimum occurrence of fragments in the DB for replacement. Default: 0.
ncores – number of cores. Default: 1.
Nice initiative! Short comments:
I can share an SA2.5 database for ChEMBL, but it will be even larger (10–20 GB). To reduce the size of a database it is possible to remove tables for context radii which you do not expect to use, for example the table with radius 5. Or remove fragments with a number of atoms greater than a chosen threshold, e.g. fragments with more than 12 atoms can be removed. Currently, there are fragments with up to 20 atoms. These are SQLite databases, so this is easy to do yourself.
If you have further questions regarding CReM, do not hesitate to contact me.
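To illustrate the kind of pruning described above, here is a hedged sqlite3 sketch. The table name `radius5` is a guess used purely for illustration; the real CReM database schema should be inspected first (e.g. `SELECT name FROM sqlite_master`):

```python
import os
import sqlite3
import tempfile

def drop_table_and_vacuum(db_path, table="radius5"):
    """Drop one (assumed) context-radius table, then VACUUM to shrink the file."""
    con = sqlite3.connect(db_path)
    con.execute(f'DROP TABLE IF EXISTS "{table}"')
    con.commit()
    con.execute("VACUUM")  # rewrites the file so the freed pages are reclaimed
    con.close()

# Demo on a throwaway database with made-up tables
db = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(db)
con.execute("CREATE TABLE radius3 (core TEXT, frag TEXT)")
con.execute("CREATE TABLE radius5 (core TEXT, frag TEXT)")
con.commit()
con.close()

drop_table_and_vacuum(db)
con = sqlite3.connect(db)
remaining = {r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
con.close()
```

The `VACUUM` step matters: dropping a table alone leaves the file size unchanged, since SQLite only marks the pages as free.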
Thanks @DrrDom this is fantastic feedback. Greatly appreciated. @DhanshreeA, for the deployed version in the Ersilia Model Hub, let's use a set of reasonable parameters and relatively small database size, just to avoid frustration from users. But let's make sure we encourage users to visit the actual CReM repo, which is well maintained and easy to install. We can reflect this nicely in the README file. What do you all think?
@DhanshreeA I've made a few edits to your initial comment, in particular to mention more explicitly CReM. Let's hear @GemmaTuron 's feedback and then it is good to go!
It is difficult to recommend specific settings because I do not know your use case.
From a general practical perspective, we usually use databases restricted by the SA of the source molecules (SA2 or SA2.5). The context radius is set to 3 or 4. This is quite conservative generation, resulting in a limited number of output molecules, but with higher chances of their synthetic accessibility. The fragment size depends on the task but is rarely larger than 12 (to cover fused bicycles with small substituents). To avoid an unreasonably large number of generated structures one may set max_replacements
to a desired value. This guarantees an upper bound on the number of generated compounds and makes generation time more predictable.
We have a web service for simple use cases - https://crem.imtm.cz. It may help to better understand the time constraints.
/approve
@DhanshreeA ersilia model repository has been successfully created and is available at:
Now that your new model repository has been created, you are ready to start contributing to it!
Here are some brief starter steps for contributing to your new model repository:
Note: Many of the bullet points below will have extra links if this is your first time contributing to a GitHub repository
README.md
file to accurately describe your model.
If you have any questions, please feel free to open an issue and get support from the community!
Associated PR: https://github.com/ersilia-os/eos4q1a/pull/1
Hi @DhanshreeA I have added the model to the AirTable and have merged the PR
Note: For the sake of simplicity we are not including the link_mols
endpoint at this time.
Further action items:
grow_molecule
API and concatenate the results, ensuring that there are no duplicates.
Update (Dec 21):
Updates:
Further action items: Since the number of output molecules for different inputs can differ so widely, we are capping the total output molecules at 100. However, to sample these output molecules for diversity, we will generate 100 clusters and return the molecule closest to the cluster centroid from each cluster.
@miquelduranfrigola / @GemmaTuron could one of you merge https://github.com/ersilia-os/eos4q1a/pull/4?
Updates:
Associated PR: https://github.com/ersilia-os/eos4q1a/pull/5 @miquelduranfrigola @GemmaTuron
Results on eml_canonical: output.csv
To understand fingerprint generation and vectorization into NumPy arrays for the downstream task of KMeans clustering, I took inspiration from https://github.com/PatWalters/kmeans/blob/master/kmeans.py (particularly for using the built-in rdkit.DataStructs.ConvertToNumpyArray) and https://github.com/ersilia-os/lazy-qsar/blob/main/lazyqsar/descriptors/descriptors.py#L328
While reading the RDKit documentation I came across similar functionality that RDKit offers for picking a subset of diverse molecules: https://www.rdkit.org/docs/GettingStartedInPython.html#picking-diverse-molecules-using-fingerprints. I have not tried it, so I am unsure whether it is better optimized for the task at hand than regular machine learning libraries like scikit-learn, which I am using right now.
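The scikit-learn centroid-sampling approach can be sketched as follows. Random bit vectors stand in for the RDKit Morgan fingerprints here, so this is an illustration of the technique rather than the deployed code:

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_representatives(fps, n_clusters=100, seed=0):
    """KMeans-cluster fingerprint vectors and return, for each cluster,
    the index of the molecule closest to the cluster centroid."""
    n_clusters = min(n_clusters, len(fps))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(fps)
    picks = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:  # KMeans can (rarely) leave a cluster empty
            continue
        dists = np.linalg.norm(fps[members] - km.cluster_centers_[c], axis=1)
        picks.append(int(members[np.argmin(dists)]))
    return picks

# Stand-in for Morgan fingerprints of ~300 generated molecules
rng = np.random.default_rng(42)
fps = rng.integers(0, 2, size=(300, 256)).astype(float)
chosen = pick_representatives(fps, n_clusters=100)
```

Because each representative comes from a different cluster, the picks are unique by construction, which also satisfies the no-duplicates requirement above.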
Updates from integration testing with the ersilia CLI:
The generate
API within the model is constantly running into this error:
Traceback (most recent call last):
File "/home/dee/miniconda3/envs/ersilia-env/bin/ersilia", line 33, in <module>
sys.exit(load_entry_point('ersilia', 'console_scripts', 'ersilia')())
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/bentoml/cli/click_utils.py", line 138, in wrapper
return func(*args, **kwargs)
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/bentoml/cli/click_utils.py", line 115, in wrapper
return_value = func(*args, **kwargs)
File "/home/dee/miniconda3/envs/ersilia-env/lib/python3.8/site-packages/bentoml/cli/click_utils.py", line 99, in wrapper
return func(*args, **kwargs)
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/cli/commands/api.py", line 36, in api
result = mdl.api(
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/core/model.py", line 334, in api
return self.api_task(
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/core/model.py", line 349, in api_task
for r in result:
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/core/model.py", line 178, in _api_runner_iter
for result in api.post(input=input, output=output, batch_size=batch_size):
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/serve/api.py", line 329, in post
self.output_adapter.adapt(
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/io/output.py", line 283, in adapt
df = self._to_dataframe(result)
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/io/output.py", line 227, in _to_dataframe
dtypes = [self.__pure_dtype(k) for k in output_keys]
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/io/output.py", line 227, in <listcomp>
dtypes = [self.__pure_dtype(k) for k in output_keys]
File "/home/dee/learn_xyz/outreachy/ersilia-project/ersilia/ersilia/io/output.py", line 156, in __pure_dtype
t = self._schema[k]["type"]
KeyError: 'outcome'
Generating an example file as suggested here and using that, I still run into the same error.
Thanks @DhanshreeA. I think both options should be fine. If the scikit-learn option is already in place, then let's use that one.
As for the error above, as discussed, let's make sure that the output file has a fixed number of columns (100). Comma-separated is fine. We may or may not include a header.
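The fix can be as simple as padding every row to the agreed number of columns before writing. A minimal sketch (the helper name and empty-string padding value are assumptions, not the actual implementation):

```python
import csv
import io

N_COLS = 100  # fixed column count agreed above

def write_fixed_columns(rows_of_smiles, fh, n_cols=N_COLS):
    """Write each list of generated SMILES as a CSV row padded
    (or truncated) to exactly n_cols columns."""
    writer = csv.writer(fh)
    for smiles_list in rows_of_smiles:
        padded = (list(smiles_list) + [""] * n_cols)[:n_cols]
        writer.writerow(padded)

# Two inputs that generated different numbers of molecules
buf = io.StringIO()
write_fixed_columns([["CCO", "CCN"], ["c1ccccc1"]], buf)
lines = buf.getvalue().splitlines()
```

This keeps the output schema constant regardless of how many molecules each input produced, which is what the output adapter needs.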
Thanks @miquelduranfrigola, I implemented this fix and the API works with the CLI now. Here is an output CSV file generated with five inputs: eos4q1a_output_2.csv
This PR is now good to be merged: https://github.com/ersilia-os/eos4q1a/pull/5 @GemmaTuron @miquelduranfrigola After it is merged, the model is ready to be tested by anyone in the team.
Hi @DhanshreeA! I've tried the model in this GitHub workflow.
It failed, unfortunately. Apparently, it is unable to find crem.
Did you try to fetch the model from your computer and check what is inside the eos4q1a
conda environment?
Hi @miquelduranfrigola, this is fixed by pushing the CReM replacement database onto the LFS server; the issue was that Git LFS was unable to find it there. I re-ran the build action in the GitHub workflow and can confirm that the model is fetched successfully.
Hi @DhanshreeA,
Update on test
I tested the model multiple times but it failed to fetch on both the CLI and Colab. The error returned on both is
EmptyOutputError
link to colab CLI log 1 CLI log 2
Test Environment
* Conda 4.12.0
* Python 3.7.13
* Windows 10 WSL
* Ubuntu 20.04
Thank you for your help @pauline-banye. The model works in Colab now.
One final TODO (bonus):
Fantastic work @DhanshreeA ! I've marked the model as Ready in the AirTable. Closing issue!
Before closing this issue, let's make sure the model is tested. I am assigning this to @Femme-js , @carcablop and @pauline-banye ! Please comment once you can confirm it works on your system and in google colab (using the newest template we edited)
Hi @DhanshreeA The model fetches successfully in the CLI and Google Colab. Fetch in Google Colab: Fetch in CLI: log_fetch_eos4q1a.txt
Prediction takes a long time; I tried it with a single molecule.
We are doing the testing in https://github.com/ersilia-os/eos4q1a/issues/7. I'll close this issue as completed.
Model Name
CReM fragment based structure generation
Model Description
The framework is an open-source implementation of fragment-based generative approaches for exploring chemical space while ensuring chemical validity. It uses a database of known compounds to find interchangeable fragments, based on the context radius of an input molecule, to generate new molecules.
Slug
crem-structure-generation
Tags
Generative
Publication
https://jcheminf.biomedcentral.com/articles/10.1186/s13321-020-00431-w
Code
License
BSD 3-clause License https://github.com/DrrDom/crem/blob/master/LICENSE.txt