wengong-jin / hgraph2graph

Hierarchical Generation of Molecular Graphs using Structural Motifs
MIT License

KeyError during preprocessing #18

Open WillButAgain opened 3 years ago

WillButAgain commented 3 years ago

After generating the vocab via the example command given in the generation directory, I ran the preprocess.py example command on the same dataset, and every 50 batches or so a KeyError occurs. Example:

[screenshot: KeyError traceback from preprocess.py]

If I manually add the missing fragment to the vocab.txt file it references, preprocessing continues until the next KeyError, caused by the absence of a different fragment. If I keep manually adding the missing keys, it eventually works after a dozen or so, but this is a fairly tedious process. I use the same --min_frequency as the repository, and noticed that if I reduce it, the vocabulary grows on the order of thousands of fragments; yet the preprocessing step ends up working if I manually add just a few dozen from the KeyError messages. Is there something I am doing wrong here?
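That manual loop can be automated: enumerate the motifs the data actually needs and diff them against vocab.txt. This is a minimal sketch mirroring the motif extraction in the repo's get_vocab.py; the mol_tree node attributes ('label', 'inter_label', 'smiles') are taken from that file and may differ in other checkouts, and the file paths are placeholders.

```python
# Sketch: print vocab entries the dataset needs but vocab.txt lacks.
# Motif extraction mirrors get_vocab.py; node attribute names are
# assumptions based on that file.
from hgraph import MolGraph

def required_vocab(smiles_list):
    needed = set()
    for smiles in smiles_list:
        hmol = MolGraph(smiles.strip())
        for _, attr in hmol.mol_tree.nodes(data=True):
            needed.add(attr['label'])
            for _, inter in attr['inter_label']:
                needed.add((attr['smiles'], inter))
    return needed

with open('vocab.txt') as f:
    existing = {tuple(line.split()[:2]) for line in f}
with open('mols.txt') as f:          # one SMILES per line
    needed = required_vocab(f)

for pair in sorted(needed - existing):
    print(*pair)                      # append these lines to vocab.txt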

byooooo commented 3 years ago

I also had this issue (see #20). It might help to rerun both the vocab generation and the preprocessing with rdkit=2019.03.4.0.
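A quick sanity check that the pin actually took effect in the active environment:

```python
import rdkit
print(rdkit.__version__)  # expect something like '2019.03.4' after pinning
```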

orubaba commented 2 years ago

Hey, were you able to solve the issue? I'm having a similar issue now. Kindly share your solution if you have one!

[screenshot: similar KeyError traceback]

marshallcase commented 2 years ago

I'm still getting the same issue too. Pinning the modules as close as possible to the versions Wengong was using (per issue #20):

```
rdkit                     2019.03.4.0      py37hc20afe1_1
python                    3.7.6                h0371630_2
pytorch                   1.12.0          py3.7_cuda11.6_cudnn8.3.2_0
numpy                     1.21.5           py37h6c91a56_3
```

still gives me the following error:

```
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/marcase/.conda/envs/hgraph-rdkit/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/marcase/.conda/envs/hgraph-rdkit/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "preprocess.py", line 19, in tensorize
    x = MolGraph.tensorize(mol_batch, vocab, common_atom_vocab)
  File "/home/marcase/hgraph2graph/hgraph/mol_graph.py", line 153, in tensorize
    tree_tensors, tree_batchG = MolGraph.tensorize_graph([x.mol_tree for x in mol_batch], vocab)
  File "/home/marcase/hgraph2graph/hgraph/mol_graph.py", line 194, in tensorize_graph
    fnode[v] = vocab[attr]
  File "/home/marcase/hgraph2graph/hgraph/vocab.py", line 43, in __getitem__
    return self.hmap[x[0]], self.vmap[x]
KeyError: 'C1=CNN=C1'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "preprocess.py", line 106, in <module>
    all_data = pool.map(func, batches)
  File "/home/marcase/.conda/envs/hgraph-rdkit/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/marcase/.conda/envs/hgraph-rdkit/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
KeyError: 'C1=CNN=C1'
```
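One stopgap while the vocab is being fixed (a hypothetical wrapper, not part of the repo): catch the KeyError inside the pool worker so a missing motif drops that batch instead of killing the whole run. MolGraph.tensorize and common_atom_vocab are the names used in the traceback and in preprocess.py.

```python
# Hypothetical wrapper for the tensorize() worker in preprocess.py:
# skip batches whose motifs are missing from vocab.txt instead of
# letting the KeyError escape the multiprocessing pool.
from hgraph import MolGraph, common_atom_vocab

def tensorize_or_skip(mol_batch, vocab):
    try:
        return MolGraph.tensorize(mol_batch, vocab, common_atom_vocab)
    except KeyError as e:
        print(f'Skipping batch: motif {e} missing from vocab')
        return None

# downstream, filter the skipped batches:
# all_data = [d for d in pool.map(func, batches) if d is not None]
```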

Bunnybeibei commented 1 year ago

I solved this problem by regenerating vocab.txt (using get_vocab.py); the bug seems to occur when a drug can't be tensorized with the vocab list.

abcdvzz commented 9 months ago

> I solved this problem by regenerating vocab.txt (using get_vocab.py); the bug seems to occur when a drug can't be tensorized with the vocab list.

Can you elaborate more on how you solved it?

Bunnybeibei commented 9 months ago

> > I solved this problem by regenerating vocab.txt (using get_vocab.py); the bug seems to occur when a drug can't be tensorized with the vocab list.
>
> Can you elaborate more on how you solved it?

I mean, if we want to preprocess a list of SMILES, we first need to run get_vocab.py on that data to create a customized vocab.txt. Then we pass the vocab.txt we created to preprocess.py instead of the one shipped with the repo. The author's vocab.txt may not cover the motifs in our data, so regenerating it resolves the issue.
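To make that concrete, here is a minimal sketch of verifying the regenerated vocab before launching the full preprocess run. The PairVocab and common_atom_vocab imports follow preprocess.py; the cuda=False flag is an assumption taken from that file, and the file paths and batch size are placeholders.

```python
# Sketch: after regenerating vocab.txt with get_vocab.py, confirm every
# training molecule tensorizes before running the full preprocess step.
from hgraph import MolGraph, PairVocab, common_atom_vocab

with open('vocab.txt') as f:
    pairs = [line.strip('\r\n ').split()[:2] for line in f]
vocab = PairVocab(pairs, cuda=False)   # cuda flag as used in preprocess.py

with open('mols.txt') as f:            # one SMILES per line
    mols = [line.strip() for line in f]

batch_size = 32                        # arbitrary choice
for i in range(0, len(mols), batch_size):
    try:
        MolGraph.tensorize(mols[i:i + batch_size], vocab, common_atom_vocab)
    except KeyError as e:
        print(f'batch {i // batch_size}: motif {e} still missing from vocab')
```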