cisnlp / simalign

Obtain Word Alignments using Pretrained Language Models (e.g., mBERT)
MIT License

Inputs not converted to cuda tensors when -device is cuda #16

Closed ishan00 closed 3 years ago

ishan00 commented 4 years ago

In the simalign.py file, the inputs aren't converted to CUDA tensors when the device flag is set to cuda.

    def get_embed_list(self, sent_pair: List[List[str]]) -> torch.Tensor:
        if self.emb_model is not None:
            # inputs is a BatchEncoding of CPU tensors; it is never moved to self.device
            inputs = self.tokenizer(sent_pair, is_pretokenized=True, padding=True, truncation=True, return_tensors="pt")
            # raises a device-mismatch RuntimeError when self.emb_model has been moved to CUDA
            outputs = self.emb_model(**inputs)[2][self.layer]

            # drop the first and last special-token positions
            return outputs[:, 1:-1, :]
        else:
            return None

This can be fixed by adding the line inputs = inputs.to(self.device) before the inputs are passed to self.emb_model.
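For reference, a minimal sketch of the method with that one-line fix applied (assuming self.device holds the torch device the embedding model was moved to, as elsewhere in the class; BatchEncoding.to() in transformers moves every tensor in the batch at once):

    def get_embed_list(self, sent_pair: List[List[str]]) -> torch.Tensor:
        if self.emb_model is not None:
            inputs = self.tokenizer(sent_pair, is_pretokenized=True, padding=True, truncation=True, return_tensors="pt")
            # move all input tensors to the same device as the model weights
            inputs = inputs.to(self.device)
            outputs = self.emb_model(**inputs)[2][self.layer]

            return outputs[:, 1:-1, :]
        else:
            return None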

masoudjs commented 3 years ago

I fixed this in the new commit. Thank you.