Riccorl / transformer-srl

Reimplementation of a BERT-based model (Shi et al., 2019), currently the state of the art for English SRL. This model also performs predicate disambiguation.

How to run the model on GPU? #9

Closed. Wangpeiyi9979 closed this issue 3 years ago.

Wangpeiyi9979 commented 3 years ago

Hi, thanks for your nice work. I have many sentences that need to be parsed; could you tell me how to parse them with this model on a GPU?

Riccorl commented 3 years ago

To run on GPU you should pass the id of the GPU (as in AllenNLP) using the cuda_device parameter of the from_path method. It is -1 for CPU and 0 for a single GPU. If you have multiple GPUs, the id can range from 0 to n-1.

from transformer_srl import dataset_readers, models, predictors

# Load the model on GPU 0 (use -1 for CPU).
predictor = predictors.SrlTransformersPredictor.from_path(
    "path/to/srl_bert_base_conll2012.tar.gz",
    "transformer_srl",
    cuda_device=0,
)
predictor.predict(
    sentence="Did Uriah honestly think he could beat the game in under three hours?"
)
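
Since you mention having many sentences, a minimal sketch follows, assuming SrlTransformersPredictor inherits predict_batch_json from AllenNLP's Predictor (the sentence list and device id below are only illustrative):

from transformer_srl import dataset_readers, models, predictors

# Load the model once on the GPU (device id 0 here; adjust to your setup).
predictor = predictors.SrlTransformersPredictor.from_path(
    "path/to/srl_bert_base_conll2012.tar.gz",
    "transformer_srl",
    cuda_device=0,
)

# Example sentences; replace with your own list.
sentences = [
    "Did Uriah honestly think he could beat the game in under three hours?",
    "The cat chased the mouse across the yard.",
]

# predict_batch_json sends the whole list through the model together,
# which avoids per-sentence overhead when running on GPU.
results = predictor.predict_batch_json([{"sentence": s} for s in sentences])
for result in results:
    print(result)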
Wangpeiyi9979 commented 3 years ago

thanks!!!