Shivanandroy / simpleT5

simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗, letting you quickly train your T5 models.

PyTorch two devices error #55

Open · mystsec opened this issue 1 year ago

mystsec commented 1 year ago

When I run:

import torch
from simplet5 import SimpleT5

model = SimpleT5()
model.device = torch.device("cuda")
model.from_pretrained("t5", "t5-large")
print(model.predict("summarize: " + text)[0])

I get the error:

Traceback (most recent call last):
  File "/home/user/MyApp/summarize.py", line 69, in <module>
    print(titlecase(model.predict("summarize: "+context)[0]))
  File "/home/user/.local/lib/python3.10/site-packages/simplet5/simplet5.py", line 464, in predict
    generated_ids = self.model.generate(
  File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 1088, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 507, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 912, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

How do I ensure that all tensors are on the GPU?
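
A quick sanity check (assuming the wrapped Hugging Face model is exposed as model.model, as the simplet5.py traceback suggests) would be to compare model.device with where the weights actually live:

print(model.device)                           # set to cuda above
print(next(model.model.parameters()).device)  # if this prints cpu, the weights never moved to the GPU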

Vinitrajputt commented 1 year ago

When I run:

model = SimpleT5()
model.device = torch.device("cuda")
model.from_pretrained("t5","t5-large")
print(model.predict("summarize: "+text)[0])

I get the error:

Traceback (most recent call last):
  File "/home/user/MyApp/summarize.py", line 69, in <module>
    print(titlecase(model.predict("summarize: "+context)[0]))
  File "/home/user/.local/lib/python3.10/site-packages/simplet5/simplet5.py", line 464, in predict
    generated_ids = self.model.generate(
  File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 1088, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 507, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 912, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

How do I ensure that all tensors are on the gpu?

You can use this to load the model:

from simplet5 import SimpleT5

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="google/flan-t5-base")

To train it on the GPU:

model.train(
    train_df=train_df[:1000],
    eval_df=test_df[:100],
    source_max_token_len=128,
    target_max_token_len=64,
    batch_size=2,
    max_epochs=5,
    use_gpu=True,
)
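
Since the original question is about prediction rather than training, here is a minimal sketch of running inference on the GPU. It assumes the wrapped Hugging Face model is exposed as model.model (as the traceback indicates) and that predict() places the input ids on model.device, so setting model.device alone is not enough; the weights have to be moved as well:

import torch
from simplet5 import SimpleT5

text = "Your long article goes here."  # placeholder input

model = SimpleT5()
model.from_pretrained("t5", "t5-large")

if torch.cuda.is_available():
    # Keep the device attribute and the actual weights in sync:
    # predict() sends the input ids to model.device, so the wrapped
    # nn.Module must live on the same device to avoid the mismatch error.
    model.device = torch.device("cuda")
    model.model = model.model.to(model.device)

print(model.predict("summarize: " + text)[0])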

Hope this helps!