aws-neuron / transformers-neuronx


Garbage output with GPT-Neox Pythia model #12

Closed dacorvo closed 1 year ago

dacorvo commented 1 year ago

The GPT-Neox-20B model is too big to run on an inf2.8xlarge instance, so I tried to convert a model with the same architecture but fewer parameters: EleutherAI/pythia-1.4B.

I first saved the model locally:

$ gptneox_demo --model_name EleutherAI/pythia-1.4B save ./pythia-1.4B
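
My understanding is that this save step is roughly equivalent to splitting the Hugging Face checkpoint with the transformers_neuronx helper (sketch only, the demo's actual arguments may differ):

```python
# Rough equivalent of the `save` step above (my reading of the demo, not its
# actual code): save_pretrained_split writes the checkpoint in the split
# layout that transformers-neuronx loads from.
from transformers import GPTNeoXForCausalLM
from transformers_neuronx.module import save_pretrained_split

hf_model = GPTNeoXForCausalLM.from_pretrained('EleutherAI/pythia-1.4B')
save_pretrained_split(hf_model, './pythia-1.4B')
```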

Then I converted it and ran an inference, but the output is garbage:

$ gptneox_demo --model_name EleutherAI/pythia-1.4B run --batch_size 1 --n_positions 20 ./pythia-1.4B
running GPTNeoXForSampling.from_pretrained
/home/ubuntu/.local/lib/python3.8/site-packages/transformers_neuronx/gptneox/model.py:40: UserWarning: hidden_act="gelu" ignored in favor of hidden_act="gelu_new"
  warnings.warn(f'hidden_act="{self.config.activation_function}" ignored in favor of hidden_act="gelu_new"')
running model.to_neuron
..Selecting 7380 allocations
0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Dependency reduction of sg0000
0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************

Compiler status PASS
2023-Jun-23 09:32:02.0148 1784:1784 [0] nccl_net_ofi_init:1415 CCOM WARN NET/OFI aws-ofi-nccl initialization failed
2023-Jun-23 09:32:02.0148 1784:1784 [0] init.cc:99 CCOM WARN OFI plugin initNet() failed is EFA enabled?
running model.sample
generated_sequence= tensor([[12092,    13,   309,  1353,   247,  3448,  1566,    13, 50276, 50276,
           521,  1028,   292, 35106,    11, 50276,    88, 50276,  7112,    11]])
["Hello, I'm a language model,     hisucetunto*  w  ido*"]

For the record, here is the output with standard CPU inference:

tensor([[12092,    13,   309,  1353,   247,  3448,  1566,    13,   285,   309,
          1353,  2820,   281,  1973,   247,  1566,   326,   476,   320,   908]])
["Hello, I'm a language model, and I'm trying to build a model that can be used"]
dacorvo commented 1 year ago

As mentioned in issue #10, I suspect this is related to the fact that the current transformers-neuronx optimized graphs only support the gelu_new activation function used in GPT2, whereas the Pythia base models from EleutherAI use gelu. Can you confirm?

If I am correct, I can open a better issue listing the GeLU flavors that need to be supported to run the most popular models from the Hugging Face hub.
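
For context, the two flavors differ only in that gelu_new is the GPT-2 tanh approximation of the exact GELU; a minimal comparison (standard formulas, not code from transformers-neuronx):

```python
import math
import torch

def gelu(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
    return 0.5 * x * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu_new(x):
    # GPT-2 style tanh approximation ("gelu_new" in Hugging Face configs).
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x.pow(3))))

x = torch.linspace(-4.0, 4.0, steps=101)
print((gelu(x) - gelu_new(x)).abs().max())  # maximum absolute difference between the two flavors
```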

jeffhataws commented 1 year ago

Hi @dacorvo , thanks for the note. I have confirmed the fix for your issue and it will be available in an upcoming release.

dacorvo commented 1 year ago

> As mentioned in issue #10, I suspect this is related to the fact that the current transformers-neuronx optimized graphs only support the gelu_new activation function used in GPT2, whereas the Pythia base models from EleutherAI use gelu. Can you confirm?
>
> If I am correct, I can open a better issue listing the GeLU flavors that need to be supported to run the most popular models from the Hugging Face hub.

This does not seem related to the GELU flavor. I switched to a locally generated wheel from the mainline branch instead of the 0.4.60 version (which I think comes from the r0.4 branch).

I cannot compile the model in AMP f32 on that branch, but if I use f16 instead (which is actually the native precision of the HF transformers model), the compilation works and the outputs are correct.
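
In case it helps reproduce, selecting half precision looked roughly like this (assuming from_pretrained accepts an amp keyword here, as the other sampling classes do):

```python
from transformers_neuronx.gptneox.model import GPTNeoXForSampling

# amp='f16' is assumed to be accepted, mirroring the other sampling classes;
# with amp='f32' the compilation fails on the mainline branch as described above.
model = GPTNeoXForSampling.from_pretrained(
    './pythia-1.4B',   # directory produced by the save step
    batch_size=1,
    n_positions=20,
    amp='f16',
)
model.to_neuron()
```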

dacorvo commented 1 year ago

Can you confirm this is fixed with the latest release?

jeffhataws commented 1 year ago

Hi @dacorvo , I have confirmed that GPT-Neox Pythia is now working with release 2.12:

(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ gptneox_demo --model_name EleutherAI/pythia-1.4B save ./pythia-1.4B; gptneox_demo --model_name EleutherAI/pythia-1.4B run --batch_size 1 --n_positions 20 ./pythia-1.4B
running GPTNeoXForSampling.from_pretrained
/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gptneox/model.py:40: UserWarning: hidden_act="gelu" ignored in favor of hidden_act="gelu_new"
  warnings.warn(f'hidden_act="{self.config.activation_function}" ignored in favor of hidden_act="gelu_new"')
running model.to_neuron
...
Compiler status PASS
running model.sample
generated_sequence= tensor([[12092,    13,   309,  1353,   247,  3448,  1566,    13,   309,   971,
           368,   281,  1071,   479,   342,   247,  1071, 20689,    15,   309]])
["Hello, I'm a language model, I want you to test me with a test corpus. I"]

Packages:

(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ pip list | grep neuron
aws-neuronx-runtime-discovery 2.9
libneuronxla                  0.5.391
neuronx-cc                    2.8.0.25+a3ad0f342
neuronx-distributed           0.1.0
neuronx-hwm                   2.8.0.3+2b7c6da39
torch-neuronx                 1.13.1.1.9.0
torch-xla                     1.13.1+torchneuron8
transformers-neuronx          0.5.58