Sentdex / ChatGPT-at-Home

ChatGPT @ Home: Large Language Model (LLM) chatbot application, written by ChatGPT
MIT License

"LayerNormKernelImpl" not implemented for 'Half' #1

Open · mika-data opened this issue 1 year ago

mika-data commented 1 year ago

Can you please tell us the module versions you used to make this app run? Mine are:

```
2023-01-25 19:12:05  (rev 2)
     ca-certificates  {2022.10.11 (defaults/win-64) -> 2023.01.10 (defaults/win-64)}
     certifi  {2022.9.24 (defaults/win-64) -> 2022.12.7 (defaults/win-64)}
    +brotlipy-0.7.0 (defaults/win-64)
    +cffi-1.15.1 (defaults/win-64)
    +charset-normalizer-2.0.4 (defaults/noarch)
    +colorama-0.4.6 (defaults/win-64)
    +cryptography-38.0.4 (defaults/win-64)
    +filelock-3.9.0 (defaults/win-64)
    +flit-core-3.6.0 (defaults/noarch)
    +future-0.18.2 (defaults/win-64)
    +huggingface_hub-0.10.1 (defaults/win-64)
    +idna-3.4 (defaults/win-64)
    +libuv-1.40.0 (defaults/win-64)
    +ninja-1.10.2 (defaults/win-64)
    +ninja-base-1.10.2 (defaults/win-64)
    +packaging-22.0 (defaults/win-64)
    +pycparser-2.21 (defaults/noarch)
    +pyopenssl-22.0.0 (defaults/noarch)
    +pysocks-1.7.1 (defaults/win-64)
    +pytorch-1.12.1 (defaults/win-64)
    +pyyaml-6.0 (defaults/win-64)
    +regex-2022.7.9 (defaults/win-64)
    +requests-2.28.1 (defaults/win-64)
    +tokenizers-0.11.4 (defaults/win-64)
    +tqdm-4.64.1 (defaults/win-64)
    +transformers-4.24.0 (defaults/win-64)
    +typing-extensions-4.4.0 (defaults/win-64)
    +typing_extensions-4.4.0 (defaults/win-64)
    +urllib3-1.26.14 (defaults/win-64)
    +win_inet_pton-1.1.0 (defaults/win-64)
    +yaml-0.2.5 (defaults/win-64)
```

```
2023-01-25 19:14:44  (rev 3)
    +click-8.0.4 (defaults/win-64)
    +flask-2.2.2 (defaults/win-64)
    +itsdangerous-2.0.1 (defaults/noarch)
    +jinja2-3.1.2 (defaults/win-64)
    +markupsafe-2.1.1 (defaults/win-64)
    +werkzeug-2.2.2 (defaults/win-64)
```

The app then fails with:

```
\Anaconda3\envs\aiml\lib\site-packages\torch\nn\functional.py", line 2503, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
127.0.0.1 - - [25/Jan/2023 19:16:13] "POST / HTTP/1.1" 500 -
```

I guess I am using `pytorch` and you are using `torch`, right?
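(For reference: conda names the package `pytorch` while pip names it `torch`, but they are the same library and both are imported as `torch`. One way to report the versions that matter in this thread, as a generic sketch rather than code from the repo:)

```python
# Conda packages this library as "pytorch", pip packages it as "torch";
# either way it is imported as torch, so these prints work in both setups.
import torch
import transformers
import flask

print("torch       ", torch.__version__)
print("transformers", transformers.__version__)
print("flask       ", flask.__version__)
```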

HaroldPetersInskipp commented 1 year ago

I seem to be running into a similar problem; here is the console output:

```
10.0.0.114 - - [25/Jan/2023 11:35:51] "GET / HTTP/1.1" 200 -
[2023-01-25 11:36:00,931] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "C:\Python39\lib\site-packages\flask\app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Python39\lib\site-packages\flask\app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Python39\lib\site-packages\flask\app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Python39\lib\site-packages\flask\app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "C:\Users\Banned\Downloads\Cryo\ChatGPT-at-Home-main\app.py", line 38, in index
    response_text = chatbot_response(input_text, history)
  File "C:\Users\Banned\Downloads\Cryo\ChatGPT-at-Home-main\app.py", line 24, in chatbot_response
    response_text = generator(input_text, max_length=1024, num_beams=1, num_return_sequences=1)[0]['generated_text']
  File "C:\Python39\lib\site-packages\transformers\pipelines\text_generation.py", line 210, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "C:\Python39\lib\site-packages\transformers\pipelines\base.py", line 1084, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "C:\Python39\lib\site-packages\transformers\pipelines\base.py", line 1091, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "C:\Python39\lib\site-packages\transformers\pipelines\base.py", line 992, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "C:\Python39\lib\site-packages\transformers\pipelines\text_generation.py", line 252, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "C:\Python39\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Python39\lib\site-packages\transformers\generation\utils.py", line 1437, in generate
    return self.sample(
  File "C:\Python39\lib\site-packages\transformers\generation\utils.py", line 2443, in sample
    outputs = self(
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Python39\lib\site-packages\transformers\models\opt\modeling_opt.py", line 932, in forward
    outputs = self.model.decoder(
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Python39\lib\site-packages\transformers\models\opt\modeling_opt.py", line 697, in forward
    layer_outputs = decoder_layer(
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Python39\lib\site-packages\transformers\models\opt\modeling_opt.py", line 323, in forward
    hidden_states = self.self_attn_layer_norm(hidden_states)
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Python39\lib\site-packages\torch\nn\modules\normalization.py", line 170, in forward
    return F.layer_norm(
  File "C:\Python39\lib\site-packages\torch\nn\functional.py", line 2205, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
10.0.0.114 - - [25/Jan/2023 11:36:00] "POST / HTTP/1.1" 500 -
```

HaroldPetersInskipp commented 1 year ago

The error occurs in the `chatbot_response` function on line 24, when the `generator` pipeline is called. The message itself points at the cause: `layer_norm` is being asked to run on a half-precision ('Half', i.e. float16) tensor, and that is the kernel that is "not implemented", so the model configuration (the dtype the weights are loaded in) is the more likely culprit than the input.

It may still be worth checking the `input_text` and `history` variables to confirm they hold the expected values, as well as checking the model configuration, in particular its `torch_dtype`.

It's also possible that the error is caused by a bug in the installed version of the Transformers library.
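Stepping back from the app, the failing op can be reproduced in isolation: with the CPU-only PyTorch builds in this thread (1.12/1.13 era), `torch.layer_norm` simply has no float16 kernel. A minimal sketch, independent of the repo:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8)             # float32: CPU has a LayerNorm kernel for this
print(F.layer_norm(x, (8,)))      # works

try:
    F.layer_norm(x.half(), (8,))  # float16 ("Half") on CPU: no kernel here
except RuntimeError as e:
    print(e)                      # "LayerNormKernelImpl" not implemented for 'Half'
```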

HaroldPetersInskipp commented 1 year ago

It looks like changing line 15 in "app.py" to:

```python
generator = pipeline('text-generation', model=f"{MODEL_NAME}", do_sample=True, torch_dtype=torch.float32)
```

fixed the issue for me.
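For context, here is roughly what that change amounts to. `MODEL_NAME` below is a placeholder (the traceback shows an OPT checkpoint), and the "before" line is inferred from the comments in this thread:

```python
import torch
from transformers import pipeline

MODEL_NAME = "facebook/opt-1.3b"  # placeholder; app.py defines its own model name

# Before (fails on CPU): torch.half loads the weights in float16, and the
# CPU builds in this thread have no float16 LayerNorm kernel.
# generator = pipeline('text-generation', model=f"{MODEL_NAME}",
#                      do_sample=True, torch_dtype=torch.half)

# After (works on CPU): load the weights in float32, which every CPU op supports.
generator = pipeline('text-generation', model=f"{MODEL_NAME}",
                     do_sample=True, torch_dtype=torch.float32)
```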

Naugustogi commented 1 year ago

Removing `torch_dtype=torch.half` also helps.
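That works because `from_pretrained` loads weights in float32 when no `torch_dtype` is passed. If a CUDA GPU is available, half precision itself is fine, since the fp16 LayerNorm kernel exists on GPU; a sketch (model name is again a placeholder):

```python
import torch
from transformers import pipeline

# fp16 LayerNorm is implemented on CUDA, so half precision can stay
# if the pipeline runs on a GPU (device=0 selects the first CUDA device).
if torch.cuda.is_available():
    generator = pipeline('text-generation', model="facebook/opt-1.3b",
                         do_sample=True, torch_dtype=torch.half, device=0)
else:
    # CPU fallback: omit torch_dtype and let the weights load as float32.
    generator = pipeline('text-generation', model="facebook/opt-1.3b",
                         do_sample=True)
```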

yukiarimo commented 1 year ago

So it looks like this issue can be closed :)