isayevlab / Auto3D_pkg

Auto3D generates low-energy conformers from SMILES/SDF
MIT License
146 stars 32 forks

Optimization engine did not run and no 3D structure converged. #72

Open shalemeghana opened 6 months ago

shalemeghana commented 6 months ago

I am working with Auto3D on a Linux-based workstation to generate and optimize 3D structures. I provided 500 compounds in .smi format. The parameter file I used is attached: parameter.txt. I ran the Auto3D engine with the following command:

python3 auto3D.py parameters.yaml

After running the above command, it reports this error:
RuntimeError: CUDA out of memory. Tried to allocate 4.27 GiB. GPU 1 has a total capacty of 15.74 GiB of which 11.88 MiB is free. Including non-PyTorch memory, this process has 15.71 GiB memory in use. Of the allocated memory 11.76 GiB is allocated by PyTorch, and 3.81 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
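The error message itself suggests one mitigation: setting `max_split_size_mb` through the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce allocator fragmentation. A minimal sketch of that workaround; the value 128 is an arbitrary starting point I chose for illustration, not a recommendation from the Auto3D authors:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be in the environment before PyTorch makes its
# first CUDA allocation, so set it before importing torch / launching Auto3D
# (or export it in the shell before running `python3 auto3D.py parameters.yaml`).
# max_split_size_mb:128 is an arbitrary starting value; tune it for your GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # → max_split_size_mb:128
```

Equivalently, `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the shell before the run. This only helps with fragmentation; if the batch genuinely exceeds GPU memory, the batch size itself must come down.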

And the complete output of the command is given below-

/home/sylab02/Auto3D_pkg/auto3D.py:166: SyntaxWarning: invalid escape sequence '\ '
  """)

     _              _             _____   ____  
    / \     _   _  | |_    ___   |___ /  |  _ \ 
   / _ \   | | | | | __|  / _ \    |_ \  | | | |
  / ___ \  | |_| | | |_  | (_) |  ___) | | |_| |
 /_/   \_\  \__,_|  \__|  \___/  |____/  |____/  2.2.8
    // Automatic generation of the low-energy 3D structures

Checking input file...
There are 499 SMILES in the input file /home/sylab02/Auto3D_pkg/standard_smiles_2.smi.
All SMILES and IDs are valid.
Suggestions for choosing isomer_engine and optimizing_engine:
Isomer engine options: RDKit and Omega.
Optimizing engine options: AIMNET.
The available memory is 60 GB.
The task will be divided into 1 jobs.
Job1, number of inputs: 499

Isomer generation for job1
Enumerating cis/tran isomers for unspecified double bonds...
Enumerating R/S isomers for unspecified atomic centers...
Removing enantiomers...
Stereo centers for 320 are not fully enumerated.
Stereo centers for 382 are not fully enumerated.
Enantiomers not removed for 54
Enumerating conformers/rotamers, removing duplicates...
100%|██████████████████████████████████| 13720/13720 [14:45:56<00:00, 3.87s/it]

Optimizing on job1
Preparing for parallel optimizing... (Max optimization steps: 10000)
Total 3D conformers: 7502
  0%|          | 0/10000 [00:00<?, ?it/s]
Process Process-5:
Traceback (most recent call last):
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/Auto3D/auto3D.py", line 142, in optim_rank_wrapper
    optimizer.run()
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/Auto3D/batch_opt/batchopt.py", line 426, in run
    optdict = ensemble_opt(ani, coord_padded, numbers_padded, charges,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/Auto3D/batch_opt/batchopt.py", line 323, in ensemble_opt
    n_steps(state, param['opt_steps'], param['opttol'], param['patience'])
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/Auto3D/batch_opt/batchopt.py", line 244, in n_steps
    e, f = state['nn'].forward_batched(coord, numbers,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/Auto3D/batch_opt/batchopt.py", line 185, in forward_batched
    _e, _f = self(coord[batch], numbers[batch], charges[batch])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/Auto3D/batch_opt/batchopt.py", line 146, in forward
    d = self.ani(
        ^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sylab02/miniconda3/envs/auto3D/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/torch/ensemble/___torch_mangle_4.py", line 27, in forward
    else:
      pass
    _out = (_0).forward(_in, )
           ~~~~~~~~~~~~ <--- HERE


    _r = annotate(Dict[str, Tensor], {})
    _6 = torch.keys(_out)
  File "code/__torch__/aimnet/modules.py", line 20, in forward
    _1 = torch.requires_grad_(data[x])
    module = self.module
    data0 = (module).forward(data, )
             ~~~~~~~~~~~~~~~ <--- HERE
    multipass_module = self.multipass_module
    if multipass_module:
  File "code/__torch__/aimnet/models/aimnet2.py", line 30, in forward
    _out0 = (self)._zero_padded(data1, _out, )
    data2 = (self)._update_q(data1, _out0, False, )
    _3 = [(self)._prepare_in_a(data2, ), (self)._prepare_in_q(data2, )]
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
    _in0 = torch.cat(_3, -1)
    _out1 = (_1).forward(_in0, )
  File "code/__torch__/aimnet/models/aimnet2.py", line 108, in _prepare_in_a
      a_i0 = a_i
    conv_a = self.conv_a
    avf_a = (conv_a).forward(a_j, data["gs"], data["gv"], )
             ~~~~~~~~~~~~~~~ <--- HERE
    return torch.cat([a_i0, avf_a], -1)
  def _prepare_in_q(self: __torch__.aimnet.models.aimnet2.AIMNet2,
  File "code/__torch__/aimnet/aev.py", line 103, in forward
      d2features0 = self.d2features
      if d2features0:
        avf_v0 = torch.einsum("...nmgd,...mag,agh->...nahd", [gv2, a, agh])
                 ~~~~~~~~~~~~ <--- HERE
        avf_v = avf_v0
      else:

Traceback of TorchScript, original code (most recent call last):
  File "/data/roman/AIMNet2Paper/models/ensemble.py", line 22, in forward
                if k in self.x:
                    _in[k] = data[k]
            _out = model(_in)
                   ~~~~~ <--- HERE
            _r = dict()
            for k in _out:
  File "/home/roman/repo/aimnet2/aimnet/modules.py", line 252, in forward
        torch.set_grad_enabled(True)
        data[self.x].requires_grad_(True)
        data = self.module(data)
               ~~~~~~~~~~~ <--- HERE
        if self.multipass_module:
            y = data[self.y][self.ipass]
  File "/home/roman/repo/aimnet2/aimnet/models/aimnet2.py", line 130, in forward
                _in = self._prepare_in_a(data)
            else:
                _in = torch.cat([self._prepare_in_a(data), self._prepare_in_q(data)], dim=-1)
                                 ~~~~~~~~~~~~~~~~~~ <--- HERE

            _out = mlp(_in)
  File "/home/roman/repo/aimnet2/aimnet/models/aimnet2.py", line 87, in _prepare_in_a
        if self.d2features:
            a_i = a_i.flatten(-2, -1)
        avf_a = self.conv_a(a_j, data['gs'], data['gv'])
                ~~~~~~~~~~~ <--- HERE
        _in = torch.cat([a_i, avf_a], dim=-1)
        return _in
  File "/home/roman/repo/aimnet2/aimnet/aev.py", line 131, in forward
            agh = self.agh
            if self.d2features:
                avf_v = torch.einsum('...nmgd,...mag,agh->...nahd', gv, a, agh)
                        ~~~~~~~~~~~~ <--- HERE
            else:
                avf_v = torch.einsum('...nmgd,...ma,agh->...nahd', gv, a, agh)
RuntimeError: CUDA out of memory. Tried to allocate 4.27 GiB. GPU 1 has a total capacty of 15.74 GiB of which 11.88 MiB is free. Including non-PyTorch memory, this process has 15.71 GiB memory in use. Of the allocated memory 11.76 GiB is allocated by PyTorch, and 3.81 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

The optimization engine did not run, or no 3D structure converged.
The reason might be one of the following:
1. Allocated memory is not enough;
2. The input SMILES encodes invalid chemical structures;
3. Patience is too small
Process Process-3:
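A practical workaround for reason 1 is to split the input .smi file into smaller batches and run Auto3D on each part separately, so fewer conformers are optimized on the GPU at once. A sketch of such a splitter; this helper is not part of Auto3D, and the file naming is purely illustrative:

```python
from pathlib import Path


def split_smi(path: str, chunk_size: int = 100) -> list[Path]:
    """Split a .smi file into chunks of at most chunk_size molecules.

    Each non-empty line is treated as one molecule entry (SMILES plus ID).
    Chunks are written next to the input as <stem>_part<N>.smi.
    """
    src = Path(path)
    lines = [ln for ln in src.read_text().splitlines() if ln.strip()]
    out_paths = []
    for i in range(0, len(lines), chunk_size):
        out = src.with_name(f"{src.stem}_part{i // chunk_size}.smi")
        out.write_text("\n".join(lines[i:i + chunk_size]) + "\n")
        out_paths.append(out)
    return out_paths


# Each resulting part can then be passed to auto3D.py in its own run.
```

Running the 499 molecules as, say, five batches of 100 keeps the per-job conformer count (and hence the peak GPU allocation) far below the 7502 conformers the single job produced here.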

System information:
- Operating System: Linux ubuntu20.04 64-bit
- Auto3D version: 2.2.8
- Python version: 3.12.1
- RDKit version: 2023.09.5
- PyTorch version: 2.1.2
LiuCMU commented 6 months ago

Hello, thanks for the detailed bug report. The error is caused by GPU 1 running out of memory.

I would suggest setting the memory argument to the actual amount of memory GPU 1 has, and leaving the capacity argument at its default value of 42. This will probably solve the issue.

Also, the value of max_confs was 1 in the original parameter file. Was there a specific reason for that? With max_confs set to 1, Auto3D will generate only one conformer for each molecule and optimize that one.
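In parameter-file terms, the advice above corresponds to a fragment like the following. All values are illustrative: memory should match what the GPU actually has (the traceback shows GPU 1 with ~16 GB), and 42 is the capacity default mentioned above:

```yaml
# Fragment of parameters.yaml -- illustrative values, adjust to your hardware.
memory: 16        # GB of memory actually available on the GPU used (GPU 1 here)
capacity: 42      # leave at the default of 42 unless there is a reason not to
max_confs:        # leave empty to let Auto3D decide conformers per molecule
```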