BinLiang-NLP / InterGCN-ABSA

[COLING 2020] Jointly Learning Aspect-Focused and Inter-Aspect Relations with Graph Convolutional Networks for Aspect Sentiment Classification
https://www.aclweb.org/anthology/2020.coling-main.13/

Code for .raw files #7

Open · karimmahalian opened this issue 2 years ago

karimmahalian commented 2 years ago

Hi,

Thank you for sharing your code with us. Could you please provide the code used for preparing the .raw files?

Thank you in advance.

BinLiang-NLP commented 2 years ago

Hi, thanks for your question. I have added the code for preparing the .raw files; please see preprocess_data.py. Let me know if there is any problem. Thanks! :-)
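For context while readers locate preprocess_data.py: .raw files in this line of work (the ASGCN-style loaders this repo follows) typically store each sample as three consecutive lines: the sentence with the aspect span replaced by a `$T$` placeholder, the aspect term itself, and the polarity label (-1/0/1). Below is a rough sketch of such a converter, assuming the SemEval-2014 `aspectTerm` XML schema; the file names and schema handling are illustrative assumptions, and the repo's preprocess_data.py remains the authoritative version.

```python
# Illustrative sketch only -- preprocess_data.py in this repo is the
# authoritative script. ASGCN-style .raw files store each sample as
# three consecutive lines: the sentence with the aspect replaced by
# "$T$", the aspect term itself, and the polarity label (-1/0/1).
import xml.etree.ElementTree as ET

POLARITY = {'negative': '-1', 'neutral': '0', 'positive': '1'}

def xml_to_raw(xml_path, raw_path):
    root = ET.parse(xml_path).getroot()
    lines = []
    for sentence in root.iter('sentence'):
        text = sentence.find('text').text
        for term in sentence.iter('aspectTerm'):
            polarity = term.get('polarity')
            if polarity not in POLARITY:      # e.g. skip 'conflict' labels
                continue
            start, end = int(term.get('from')), int(term.get('to'))
            lines += [text[:start] + '$T$' + text[end:],  # mask the aspect span
                      term.get('term'),
                      POLARITY[polarity]]
    with open(raw_path, 'w', encoding='utf-8') as f:
        f.write('\n'.join(lines) + '\n')

# Hypothetical file names, shown only as a usage example:
xml_to_raw('Restaurants_Train.xml', 'rest14_train.raw')
```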

karimmahalian commented 2 years ago

Hi,

First, I really appreciate the time and effort you have spent helping us run the code. I encountered an error when running this command on the rest15 dataset:

!python train_bert.py --model_name intergcn_bert --dataset rest15 --num_epoch 20 --lr 2e-5 --batch_size 16 --seed 776

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [16, 85, 768]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Any ideas, please?

Many thanks in advance

BinLiang-NLP commented 2 years ago

Hi, I am very sorry about the error. Could you confirm that your PyTorch version is >= 1.0.0? I ran the code with the requirements listed in the README and did not encounter this problem. Please let me know if the issue persists. Thanks!
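For anyone hitting the same trace: this RuntimeError generally means a tensor produced by ReLU was later modified in place (e.g. an in-place add on an activation, or `relu(..., inplace=True)`) before backward ran, and newer PyTorch releases enforce this version check more strictly, which may explain why the same code runs under the README's pinned version but fails on a later one. A minimal reproduction with the usual one-line fix, illustrative only and not taken from the repo's model code:

```python
# Minimal, self-contained reproduction (not the repo's model code):
# an in-place op mutates a ReLU output that autograd saved for backward.
import torch
import torch.nn.functional as F

torch.autograd.set_detect_anomaly(True)  # optional: reports the offending forward op

x = torch.randn(16, 85, 768, requires_grad=True)
h = F.relu(x)   # ReLU's output is saved for ReluBackward0
h += 1.0        # in-place update bumps h's version counter
# h = h + 1.0   # the fix: an out-of-place add allocates a new tensor
h.sum().backward()  # raises: "... modified by an inplace operation ..."
```

If the trace points into the model, the fix has the same shape: rewrite in-place updates such as `h += y` on graph tensors as `h = h + y`, and drop `inplace=True` from activations.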