stanford-crfm / BioMedLM


demo.py's unexpected behavior #6

Open vriez opened 1 year ago

vriez commented 1 year ago

The model is too big for my 8 GiB GPU, so I get:

RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 7.80 GiB total capacity; 7.19 GiB already allocated; 76.00 MiB free; 7.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

The first workaround that comes to mind is to use half precision:

model = GPT2LMHeadModel.from_pretrained("stanford-crfm/pubmedgpt").half().to(device)

It runs, but the output is

The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:28895 for open-end generation.
Output:
----------------------------------------------------------------------------------------------------
Photosynthesis is ~10-fold lower in *gabaculine-treated* ***spmh7*** **plants in comparison to** ***spmh7*** **

This output looks odd.

What have I done wrong? How can I fix it?
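For reference, both warnings are usually silenced by passing the attention mask and an explicit `pad_token_id` to `generate()`. A minimal sketch, reusing the prompt from the outputs above (the generation settings are illustrative, not taken from demo.py):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = GPT2Tokenizer.from_pretrained("stanford-crfm/pubmedgpt")
model = GPT2LMHeadModel.from_pretrained("stanford-crfm/pubmedgpt").half().to(device)

# return_tensors="pt" yields both input_ids and attention_mask.
inputs = tokenizer("Photosynthesis is", return_tensors="pt").to(device)

output = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # silences the attention-mask warning
    pad_token_id=tokenizer.eos_token_id,      # silences the pad_token_id warning
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```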

My setup is:

OS: Ubuntu 20.04
GPU: GeForce RTX 2070 Mobile (8 GiB)
Python packages:
Package            Version     
------------------ ------------    
tokenizers         0.13.2      
torch              1.12.1+cu116
torchaudio         0.12.1+cu116
torchvision        0.13.1+cu116 
transformers       4.25.1      
Nardien commented 1 year ago

How about using int8 quantization or the parallelformers library?

(Just a suggestion; note that I'm not a maintainer of this repo.)
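For context, int8 in transformers is applied at load time through bitsandbytes rather than via a dtype cast. A minimal sketch of that route, assuming `bitsandbytes` and `accelerate` are installed:

```python
from transformers import GPT2LMHeadModel

# 8-bit weights are produced during loading, not by calling .to(torch.int8).
model = GPT2LMHeadModel.from_pretrained(
    "stanford-crfm/pubmedgpt",
    device_map="auto",   # accelerate places the weights across available devices
    load_in_8bit=True,   # bitsandbytes quantizes the linear layers to int8
)
```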

J38 commented 1 year ago

Have you tried making things bf16?
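A minimal sketch of loading directly in bf16, assuming `torch_dtype` is honored by this transformers version (note that the RTX 2070 is a Turing GPU without native bf16 support, so bf16 may run slowly there):

```python
import torch
from transformers import GPT2LMHeadModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# torch_dtype avoids first materializing the weights in fp32.
model = GPT2LMHeadModel.from_pretrained(
    "stanford-crfm/pubmedgpt",
    torch_dtype=torch.bfloat16,
).to(device)
```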

vriez commented 1 year ago

> How about using int8 quantization or the parallelformers library?
>
> (Just a suggestion; note that I'm not a maintainer of this repo.)

This approach yields:

Traceback (most recent call last):
  File "demo.py", line 9, in <module>
    model = GPT2LMHeadModel.from_pretrained("stanford-crfm/pubmedgpt").to(torch.int8).to(device)
  File "/home/vitor/Projects/pubmedgpt/venv_1/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1682, in to
    return super().to(*args, **kwargs)
  File "/home/vitor/Projects/pubmedgpt/venv_1/lib/python3.8/site-packages/torch/nn/modules/module.py", line 912, in to
    raise TypeError('nn.Module.to only accepts floating point or complex '
TypeError: nn.Module.to only accepts floating point or complex dtypes, but got desired dtype=torch.int8
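That failure is expected: `nn.Module.to` only casts to floating-point or complex dtypes, so int8 cannot be reached with a cast. It has to go through a quantization path such as the `load_in_8bit` sketch above; `.half()` and `.to(torch.bfloat16)` remain valid because fp16 and bf16 are floating dtypes.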
vriez commented 1 year ago

> Have you tried making things bf16?

bfloat16 yields:

Photosynthesis is [*M*~0′~ = *I*~0′~ × *N*~0′~ × 0.5 × 255] the light absorbed, [*M

while float16 yields:

Photosynthesis is ~520,000-fold more efficient in C~4~ plants than in C~3~ plants because CO~2~ is first incorporated into a C~4~ acid (malate or aspartate) by phospho

Apparently, float16 outputs less gibberish. Is this behavior related to the warning message?

The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:28895 for open-end generation.
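For a single unpadded prompt the default attention mask is all ones anyway, so the warning is most likely benign here; the quality difference between float16 and bfloat16 is more plausibly a precision effect, since bf16 trades mantissa bits for dynamic range.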