A deep learning system for demographic inference (gender, age, and individual/person) that was trained on a massive Twitter dataset using profile images, screen names, names, and biographies
Hi,
I've been enjoying this project a lot for my research, but recently I've been having issues using it on a machine that has PyTorch 1.8.0 installed. The error happens when I try to use any of the available models with a GPU:
```python
from m3inference import M3Inference
import pprint

m3 = M3Inference()  # see docstring for details
pred = m3.infer('./test/data_resized.jsonl')  # also see docstring for details
pprint.pprint(pred)
```
where it produces the following error:
```
pred = m3.infer('./test/data_resized.jsonl')  # also see docstring for details
03/19/2021 12:13:13 - INFO - m3inference.dataset -  7 data entries loaded.
Predicting...:   0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/minje/libraries/m3inference/m3inference/m3inference.py", line 127, in infer
    pred = self.model(batch)
  File "/opt/anaconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/minje/libraries/m3inference/m3inference/full_model.py", line 99, in forward
    username_pack, username_unsort = pack_wrapper(username_embed, username_len)
  File "/home/minje/libraries/m3inference/m3inference/utils.py", line 47, in pack_wrapper
    packed = pack_padded_sequence(sents_sorted, lengths_sorted, batch_first=True)
  File "/opt/anaconda/lib/python3.8/site-packages/torch/nn/utils/rnn.py", line 245, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
```
I think this is related to PyTorch's change to pack_padded_sequence, which now only accepts the lengths argument as a CPU tensor when a tensor is passed [link]. I would appreciate it a lot if you could look into this. Thanks!
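For what it's worth, a minimal sketch of the workaround: moving the lengths tensor to the CPU with `.cpu()` before calling pack_padded_sequence avoids the RuntimeError, even when the padded input itself stays on the GPU. The tensors below are made-up dummy data just to illustrate the call; in the project, the analogous change would presumably go where pack_wrapper calls pack_padded_sequence in m3inference/utils.py.

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Dummy padded batch: 2 sequences, max length 5, embedding dim 4
# (batch_first=True layout, as in the pack_wrapper call above).
sents = torch.randn(2, 5, 4)
lengths = torch.tensor([5, 3])  # true lengths, sorted descending

# If `lengths` lives on cuda:0 (e.g. it was created alongside the
# inputs), recent PyTorch raises the RuntimeError from the traceback.
# Explicitly moving it to the CPU keeps the call working on all devices.
packed = pack_padded_sequence(sents, lengths.cpu(), batch_first=True)

# The packed data holds 5 + 3 = 8 timesteps of dimension 4.
print(tuple(packed.data.shape))  # (8, 4)
```

Calling `.cpu()` on a tensor that is already on the CPU is a no-op, so this should be safe on CPU-only setups as well.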