Closed · dacorvo closed this 1 year ago
As mentioned in issue #10, I suspect this is related to the fact that the current transformers-neuronx optimized graphs only support the gelu_new activation function used in GPT2, whereas the Pythia base models from EleutherAI use gelu. Can you confirm?
If I am correct, I can create a follow-up issue listing the GeLU flavors that need to be supported to run the most popular models from the Hugging Face hub.
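To make the distinction concrete, here is a small sketch of the two GELU flavors as defined in HF transformers (gelu is the exact erf-based form; gelu_new is GPT-2's tanh approximation). The function names here are illustrative, not taken from the thread:

```python
import math

def gelu(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_new(x):
    # GPT-2's tanh approximation ("gelu_new" in HF transformers):
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  gelu={gelu(x):+.6f}  gelu_new={gelu_new(x):+.6f}")
```

The two curves agree closely for small inputs, which is why substituting one for the other usually degrades outputs only slightly rather than producing garbage.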
Hi @dacorvo , thanks for the note. I have confirmed the fix for your issue and it will be available in an upcoming release.
This does not seem related to the GELU flavor. I switched to a locally generated wheel from the mainline branch instead of the 0.4.60 version (which I think comes from the r0.4 branch).
I cannot compile the model in AMP f32 on that branch, but if I use f16 instead (which is the HF transformers model's actual precision), the compilation works and the outputs are correct.
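As an illustrative sketch only (this follows the transformers-neuronx from_pretrained pattern and the GPTNeoXForSampling class visible in the log below; it requires Neuron hardware and is not runnable elsewhere), the precision is selected via the amp argument:

```python
# Sketch, not a verified script: the `amp` argument selects the
# automatic mixed-precision dtype used for the compiled graph.
from transformers_neuronx.gptneox.model import GPTNeoXForSampling

# amp='f16' matches the checkpoint's native precision;
# amp='f32' is what failed to compile on the mainline branch.
model = GPTNeoXForSampling.from_pretrained('./pythia-1.4B', batch_size=1, amp='f16')
model.to_neuron()
```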
Can you confirm this is fixed with the latest release?
Hi @dacorvo , I have confirmed that GPT-Neox Pythia is now working with release 2.12:
(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ gptneox_demo --model_name EleutherAI/pythia-1.4B save ./pythia-1.4B; gptneox_demo --model_name EleutherAI/pythia-1.4B run --batch_size 1 --n_positions 20 ./pythia-1.4B
running GPTNeoXForSampling.from_pretrained
/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gptneox/model.py:40: UserWarning: hidden_act="gelu" ignored in favor of hidden_act="gelu_new"
warnings.warn(f'hidden_act="{self.config.activation_function}" ignored in favor of hidden_act="gelu_new"')
running model.to_neuron
...
Compiler status PASS
running model.sample
generated_sequence= tensor([[12092, 13, 309, 1353, 247, 3448, 1566, 13, 309, 971,
368, 281, 1071, 479, 342, 247, 1071, 20689, 15, 309]])
["Hello, I'm a language model, I want you to test me with a test corpus. I"]
Packages:
(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ pip list | grep neuron
aws-neuronx-runtime-discovery 2.9
libneuronxla 0.5.391
neuronx-cc 2.8.0.25+a3ad0f342
neuronx-distributed 0.1.0
neuronx-hwm 2.8.0.3+2b7c6da39
torch-neuronx 1.13.1.1.9.0
torch-xla 1.13.1+torchneuron8
transformers-neuronx 0.5.58
The GPT-Neox-20B model is too big to run on an inf2.8xlarge instance, so I tried to convert a model with the same architecture but fewer parameters: EleutherAI/pythia-1.4B.
I first saved the model locally:
Then I converted it and ran an inference, but the output is garbage:
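The exact commands from this report were not preserved, but they presumably match the gptneox_demo invocation shown in the confirmation above (save, then compile and sample):

```shell
# Save the checkpoint locally, then compile and run a sampling inference
# (same gptneox_demo entry point as in the confirmation above).
gptneox_demo --model_name EleutherAI/pythia-1.4B save ./pythia-1.4B
gptneox_demo --model_name EleutherAI/pythia-1.4B run --batch_size 1 --n_positions 20 ./pythia-1.4B
```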
For the record, here is the output with standard CPU inference:
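The CPU reference script was not included in the thread either; a minimal reconstruction using the standard HF transformers API (the prompt matches the one in the Neuron output above; sampling output will vary) would look like:

```python
# Hypothetical reconstruction of the CPU reference run; the original
# script was not posted in the thread. Downloads the model from the hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4B")

inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20, do_sample=True)
print(tokenizer.decode(outputs[0]))
```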