chiayewken / Span-ASTE

Code Implementation of "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction".
MIT License
169 stars · 45 forks

Using the notebook when there is no GPU #21

Closed xiaoqingwan closed 2 years ago

xiaoqingwan commented 2 years ago

Hello! Thank you for sharing this work! I was wondering how I can use the demo notebook locally when there is no GPU?

When running the cell under "# Use pretrained SpanModel weights for prediction", I got this error:

```
2022-07-06 12:28:07,840 - INFO - allennlp.common.plugins - Plugin allennlp_models available
Traceback (most recent call last):
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/bin/allennlp", line 8, in <module>
    sys.exit(run())
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/__main__.py", line 34, in run
    main(prog="allennlp")
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 118, in main
    args.func(args)
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 205, in _predict
    predictor = _get_predictor(args)
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 105, in _get_predictor
    check_for_gpu(args.cuda_device)
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/common/checks.py", line 131, in check_for_gpu
    " 'trainer.cuda_device=-1' in the json config file." + torch_gpu_error
allennlp.common.checks.ConfigurationError: Experiment specified a GPU but none is available;
if you want to run on CPU use the override 'trainer.cuda_device=-1' in the json config file.
module 'torch.cuda' has no attribute '_check_driver'
```

I changed cuda_device to -1 in the jsonnet files in your training_config folder, but still no luck.
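For reference, AllenNLP's `check_for_gpu` raises exactly this error when the configured cuda_device is >= 0 but PyTorch cannot see a GPU. A quick way to confirm which value your environment supports is a sketch like the following (`pick_cuda_device` is a hypothetical helper, not part of this repo):

```python
def pick_cuda_device() -> int:
    """Return the cuda_device value AllenNLP expects:
    0 for the first visible GPU, -1 for CPU-only."""
    try:
        import torch  # imported lazily so the helper also works without torch installed
        return 0 if torch.cuda.is_available() else -1
    except ImportError:
        # No torch at all: only CPU is possible.
        return -1


if __name__ == "__main__":
    print(pick_cuda_device())  # -1 on a machine with no usable GPU
```

On a GPU-less laptop like the one in this issue, this returns -1, which is the value the config (and any `--cuda-device` argument) needs to carry.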

chiayewken commented 2 years ago

Hi, sorry, CPU inference doesn't seem to be supported yet; we will try to implement and test it in the future.

Jurys22 commented 2 years ago

I am only running it on CPU (Windows). What I've done is:

  • template.libsonnet: set the two mentions of cuda_device as `cuda_device :: -1`,
  • when calling the python script, avoid also passing the argument for cuda

hope it helps
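For concreteness, the template.libsonnet change might look like the following. This is a sketch: only the cuda_device value comes from this thread, and the surrounding field structure of the repo's actual config is an assumption.

```jsonnet
// Hypothetical excerpt of training_config/template.libsonnet.
// "::" marks a hidden (overridable) jsonnet field; -1 tells AllenNLP to run on CPU.
{
  trainer: {
    cuda_device:: -1,  // was a GPU index such as 0
  },
}
```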

dipanmoy commented 2 years ago

@chiayewken Thanks for opening the thread. I will check and get back to you. Kindly don't close this thread for the next few days.

jasonhuynh83 commented 2 years ago

> I am only running it on CPU (Windows). What I've done is:
>
> • template.libsonnet: set the two mentions of cuda_device as `cuda_device :: -1`,
> • when calling the python script, avoid also passing the argument for cuda
>
> hope it helps

Unfortunately, I am unable to get this to run on macOS CPU. I've set cuda_device to -1 in template.libsonnet; how do I avoid passing the cuda argument when calling the python script?

Jurys22 commented 2 years ago

This follows the main file, which I copied below. I don't use this parameter:

`--trainer__cuda_device "$DEVICE" \`

```shell
rm -rf model*
mkdir -p models
$PYTHON aste/main.py \
    --names 14lap,14lap,14lap,14lap,14lap,14res,14res,14res,14res,14res,15res,15res,15res,15res,15res,16res,16res,16res,16res,16res \
    --seeds 0,1,12,123,1234,0,1,12,123,1234,0,1,12,123,1234,0,1,12,123,1234 \
    --trainer__cuda_device "$DEVICE" \
    --trainer__num_epochs 10 \
    --trainer__checkpointer__num_serialized_models_to_keep 1 \
    --model__span_extractor_type "endpoint" \
    --model__modules__relation__use_single_pool False \
    --model__relation_head_type "proper" \
    --model__use_span_width_embeds True \
    --model__modules__relation__use_distance_embeds True \
    --model__modules__relation__use_pair_feature_multiply False \
    --model__modules__relation__use_pair_feature_maxpool False \
    --model__modules__relation__use_pair_feature_cls False \
    --model__modules__relation__use_span_pair_aux_task False \
    --model__modules__relation__use_span_loss_for_pruners False \
    --model__loss_weights__ner 1.0 \
    --model__modules__relation__spans_per_word 0.5 \
    --model__modules__relation__neg_class_weight -1
```

(Note: the double underscores in these flags were swallowed by markdown italics in the original comment; they are restored here.)