kexinhuang12345 / DeepPurpose

A Deep Learning Toolkit for DTI, Drug Property, PPI, DDI, Protein Function Prediction (Bioinformatics)
https://doi.org/10.1093/bioinformatics/btaa1005
BSD 3-Clause "New" or "Revised" License

errors when I ran "MPNN_AAC_Kiba.ipynb" #74

Closed. xuzhang5788 closed this issue 3 years ago.

xuzhang5788 commented 3 years ago

I got this error when I ran "MPNN_AAC_Kiba.ipynb"

RuntimeError: CUDA error: device-side assert triggered

It happened again when I ran "case-study-II-Virtual-Screening-for-BindingDB-IC50.ipynb"

kexinhuang12345 commented 3 years ago

This is a CUDA error.

"The error messages you get when running into this error may not be very descriptive. To make sure you get the complete and useful stack trace, have this at the very beginning of your code and run it before anything else:"

CUDA_LAUNCH_BLOCKING="1"

Can you do that and send us the more descriptive error message?

xuzhang5788 commented 3 years ago

Thank you for your fast response. I restarted my computer and set CUDA_LAUNCH_BLOCKING="1" at the beginning, but I still got error messages like the following:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 from DeepPurpose import oneliner
      2 from DeepPurpose.dataset import *
----> 3 oneliner.virtual_screening(*load_IC50_1000_Samples())

~/projects/DeepPurpose/DeepPurpose/oneliner.py in virtual_screening(target, X_repurpose, target_name, drug_names, train_drug, train_target, train_y, save_dir, pretrained_dir, finetune_epochs, finetune_LR, finetune_batch_size, convert_y, subsample_frac, pretrained, split, frac, agg, output_len)
    259     os.mkdir(result_folder_path)
    260
--> 261     y_pred = models.virtual_screening(X_repurpose, target, model, drug_names, target_name, convert_y = convert_y, result_folder = result_folder_path, verbose = False)
    262     y_preds_models.append(y_pred)
    263     print('Predictions from model ' + str(idx + 1) + ' with drug encoding ' + model_name[0] + ' and target encoding ' + model_name[1] + ' are done...')

~/projects/DeepPurpose/DeepPurpose/DTI.py in virtual_screening(X_repurpose, target, model, drug_names, target_names, result_folder, convert_y, output_num_max, verbose)
    162     df_data = data_process_repurpose_virtual_screening(X_repurpose, target, \
    163         model.drug_encoding, model.target_encoding, 'virtual screening')
--> 164     y_pred = model.predict(df_data)
    165
    166     if convert_y:

~/projects/DeepPurpose/DeepPurpose/DTI.py in predict(self, df_data)
    530     generator = data.DataLoader(info, **params)
    531
--> 532     score = self.test_(generator, self.model, repurposing_mode = True)
    533     return score
    534

~/projects/DeepPurpose/DeepPurpose/DTI.py in test_(self, data_generator, model, repurposing_mode, test)
    289     else:
    290         v_p = v_p.float().to(self.device)
--> 291     score = self.model(v_d, v_p)
    292     if self.binary:
    293         m = torch.nn.Sigmoid()

~/miniconda3/envs/DeepPurpose/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725         result = self._slow_forward(*input, **kwargs)
    726     else:
--> 727         result = self.forward(*input, **kwargs)
    728     for hook in itertools.chain(
    729         _global_forward_hooks.values(),

~/projects/DeepPurpose/DeepPurpose/DTI.py in forward(self, v_D, v_P)
     45     def forward(self, v_D, v_P):
     46         # each encoding
---> 47         v_D = self.model_drug(v_D)
     48         v_P = self.model_protein(v_P)
     49         # concatenate and classify

~/miniconda3/envs/DeepPurpose/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725         result = self._slow_forward(*input, **kwargs)
    726     else:
--> 727         result = self.forward(*input, **kwargs)
    728     for hook in itertools.chain(
    729         _global_forward_hooks.values(),

~/projects/DeepPurpose/DeepPurpose/encoders.py in forward(self, feature)
    271         embeddings.append(embed.to(device))
    272         continue
--> 273     sub_fatoms = fatoms[N_atoms:N_atoms+n_a,:].to(device)
    274     sub_fbonds = fbonds[N_bonds:N_bonds+n_b,:].to(device)
    275     sub_agraph = agraph[N_atoms:N_atoms+n_a,:].to(device)

RuntimeError: CUDA error: device-side assert triggered
kexinhuang12345 commented 3 years ago

That's weird. Could you share your script with us?

xuzhang5788 commented 3 years ago

I just ran "case-study-II-Virtual-Screening-for-BindingDB-IC50.ipynb" cell by cell. I added one cell at the beginning with CUDA_LAUNCH_BLOCKING="1", ran that cell first, and then ran the other cells.

kexinhuang12345 commented 3 years ago

In a Jupyter notebook, you need to use os.environ['CUDA_LAUNCH_BLOCKING'] = '1'.
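For reference, here is a minimal sketch of what that first notebook cell could look like, reusing the imports and the oneliner call from the traceback above; the environment variable should be set before any CUDA work is done, so putting it in the very first cell, ahead of the DeepPurpose imports, is the safest option:

import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # set before any CUDA work so the assert surfaces at the failing line

from DeepPurpose import oneliner
from DeepPurpose.dataset import *

oneliner.virtual_screening(*load_IC50_1000_Samples())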

xuzhang5788 commented 3 years ago

It still didn't work with my previous virtual environment. I recreated a new virtual env following your instructions, modified utils.py, and ran python setup.py install; now it works.

In addition, since I followed your installation instructions to install DeepPurpose with pip, I suggest adding one line to install Jupyter notebook using conda install -c conda-forge notebook. Otherwise, Jupyter notebook cannot find torch even though torch is installed.
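As a side note (this snippet is not from the thread, just an illustrative check), a quick way to confirm that the notebook kernel is using the intended environment and can see torch and the GPU:

import sys
import torch

print(sys.executable)             # should point into the DeepPurpose conda environment
print(torch.__version__)          # confirms torch is importable from this kernel
print(torch.cuda.is_available())  # confirms the GPU is visible to PyTorch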

Regarding the new env, I got warnings like:

WARNING:root:No normalization for BCUT2D_MWHI
WARNING:root:No normalization for BCUT2D_MWLOW
WARNING:root:No normalization for BCUT2D_CHGHI
WARNING:root:No normalization for BCUT2D_CHGLO
WARNING:root:No normalization for BCUT2D_LOGPHI
WARNING:root:No normalization for BCUT2D_LOGPLOW
WARNING:root:No normalization for BCUT2D_MRHI
WARNING:root:No normalization for BCUT2D_MRLOW

I am not sure if it is critical.

Many thanks

futianfan commented 3 years ago

thanks for your suggestion!