aws-neuron / transformers-neuronx


gpt2_demo @ d3f6e49 (latest) breaks #27

Closed · supersat closed this 4 days ago

supersat commented 11 months ago

I usually install transformers_neuronx from git, and since the latest commit says it was updated for SDK release 2.12, I assumed the GitHub version was the same as the released one. However, running gpt2_demo with the GitHub version breaks:
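For reference, the two install paths being compared are roughly the following (a sketch; the index URL is the Neuron pip repository referenced below):

# install straight from the GitHub source tree
pip install git+https://github.com/aws-neuron/transformers-neuronx.git
# install the released wheel from the Neuron pip repository
pip install transformers-neuronx --extra-index-url=https://pip.repos.neuron.amazonaws.com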

(aws_neuron_venv_pytorch) ubuntu@ip-172-31-40-142:~$ gpt2_demo run gpt2-small
running GPT2ForSampling.from_pretrained
running model.to_neuron
....
Compiler status PASS
Traceback (most recent call last):
  File "/opt/aws_neuron_venv_pytorch/bin/gpt2_demo", line 8, in <module>
    sys.exit(main())
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gpt2/demo.py", line 20, in main
    demo('gpt2', GPT2ForSampling, amp_callback)
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gpt_demo.py", line 61, in demo
    run(args, model_name, model_cls)
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gpt_demo.py", line 105, in run
    model.to_neuron()
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gpt2/model.py", line 117, in to_neuron
    self.decoder_lm_head.to_neuron()
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/decoder.py", line 121, in to_neuron
    self.program.setup(self.layers, ln_lm_head_params)
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/decoder.py", line 872, in setup
    super().setup(layers, ln_lm_head_params)
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/decoder.py", line 827, in setup
    kernel.load()
  File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/compiler.py", line 376, in load
    self.model = torch.classes.neuron.ParallelModel(self.neff_bytes, self.tp_degree, self.g_start_device_id, self.g_device_count)
RuntimeError: __init__() expected at most 3 argument(s) but received 5 argument(s). Declaration: __init__(__torch__.torch.classes.neuron.ParallelModel _0, str _1, int _2) -> NoneType _0

I diffed the latest version of the wheel (https://pip.repos.neuron.amazonaws.com/transformers-neuronx/transformers_neuronx-0.5.58-py3-none-any.whl) against what's in git, and git appears to have many extra changes, so now I'm wondering whether the pip wheel is outdated or the two have diverged somehow.
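(A rough sketch of that comparison, for anyone who wants to reproduce it; the package path inside the git checkout may need adjusting to match the repo layout:)

wget https://pip.repos.neuron.amazonaws.com/transformers-neuronx/transformers_neuronx-0.5.58-py3-none-any.whl
unzip -d wheel_contents transformers_neuronx-0.5.58-py3-none-any.whl
git clone https://github.com/aws-neuron/transformers-neuronx.git
diff -r wheel_contents/transformers_neuronx transformers-neuronx/src/transformers_neuronx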

aaroniscode commented 11 months ago

I'm hitting the same issue trying to run the "Generative AI contents through Amazon Trainium and Inferentia" workshop.

The error happens in the section "Port the model to Neuron SDK and deploy it on Inferentia".

(aws_neuron_venv_pytorch) [ec2-user@ip-172-31-20-49 opt-2.7b]$ python3 model_neuron.py
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████████| 691/691 [00:00<00:00, 274kB/s]
Downloading pytorch_model.bin: 100%|██████████████████████████████████████████████████████| 5.30G/5.30G [00:26<00:00, 198MB/s]
Downloading (…)neration_config.json: 100%|███████████████████████████████████████████████████| 137/137 [00:00<00:00, 61.2kB/s]
running model.to_neuron
.........
Compiler status PASS
Traceback (most recent call last):
  File "model_neuron.py", line 37, in <module>
    model.to_neuron()
  File "/opt/aws_neuron_venv_pytorch/lib64/python3.8/site-packages/transformers_neuronx/opt/model.py", line 96, in to_neuron
    self.decoder_lm_head.to_neuron()
  File "/opt/aws_neuron_venv_pytorch/lib64/python3.8/site-packages/transformers_neuronx/decoder.py", line 121, in to_neuron
    self.program.setup(self.layers, ln_lm_head_params)
  File "/opt/aws_neuron_venv_pytorch/lib64/python3.8/site-packages/transformers_neuronx/decoder.py", line 872, in setup
    super().setup(layers, ln_lm_head_params)
  File "/opt/aws_neuron_venv_pytorch/lib64/python3.8/site-packages/transformers_neuronx/decoder.py", line 827, in setup
    kernel.load()
  File "/opt/aws_neuron_venv_pytorch/lib64/python3.8/site-packages/transformers_neuronx/compiler.py", line 376, in load
    self.model = torch.classes.neuron.ParallelModel(self.neff_bytes, self.tp_degree, self.g_start_device_id, self.g_device_count)
RuntimeError: __init__() expected at most 3 argument(s) but received 5 argument(s). Declaration: __init__(__torch__.torch.classes.neuron.ParallelModel _0, str _1, int _2) -> NoneType _0

I'm using: Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2) 20230720

ashivadi commented 11 months ago

Looks like this might be the relevant commit: https://github.com/aws-neuron/transformers-neuronx/commit/001b5c2f6c077e3c03e9305e8b1c1d44468bd230

jeffhataws commented 11 months ago

Thanks @supersat, @aaroniscode, @ashivadi. This is a known issue and we are working on it: https://github.com/aws-neuron/aws-neuron-samples/issues/24

For now, please make the following change in your notebook to install the SDK release 2.12 transformers-neuronx pip wheel:

pip install transformers-neuronx transformers -U
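
If the Neuron pip repository is not already configured as an extra index in your environment, the same wheel can be pulled explicitly, for example:

pip install -U transformers-neuronx transformers --extra-index-url=https://pip.repos.neuron.amazonaws.com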

Thanks!