Closed logolemon closed 1 year ago
Hi 👋, there isn't much context here.
My reproduction steps are the same as yours, and I've already downloaded the model parameter files. The error context is below; could you please take a look?
C:\Users\Logo.conda\envs\PhotoToText\python.exe E:/01_Practice_materials/meta-llama/docker-llama2-chat/llama2-7b/app.py
Loading checkpoint shards: 100%|██████████| 2/2 [00:42<00:00, 21.34s/it]
C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\utils.py:833: UserWarning: Expected 7 arguments for function <function generate at 0x000001A24E0C6AF0>, received 6.
warnings.warn(
C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\utils.py:837: UserWarning: Expected at least 7 arguments for function <function generate at 0x000001A24E0C6AF0>, received 6.
warnings.warn(
Running on local URL: http://0.0.0.0:7860
C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\transformers\generation\utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Exception in thread Thread-6:
Traceback (most recent call last):
File "C:\Users\Logo.conda\envs\PhotoToText\lib\threading.py", line 980, in _bootstrap_inner
self.run()
File "C:\Users\Logo.conda\envs\PhotoToText\lib\threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\transformers\generation\utils.py", line 1271, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\transformers\generation\utils.py", line 1144, in _validate_model_kwargs
raise ValueError(
ValueError: The following model_kwargs are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
Traceback (most recent call last):
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\routes.py", line 442, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\blocks.py", line 1389, in process_api
result = await self.call_function(
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\blocks.py", line 1108, in call_function
prediction = await utils.async_iteration(iterator)
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\utils.py", line 346, in async_iteration
return await iterator.__anext__()
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\utils.py", line 339, in __anext__
return await anyio.to_thread.run_sync(
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\utils.py", line 322, in run_sync_iterator_async
return next(iterator)
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\gradio\utils.py", line 691, in gen_wrapper
yield from f(*args, **kwargs)
File "E:\01_Practice_materials\meta-llama\docker-llama2-chat\llama2-7b\app.py", line 73, in generate
first_response = next(generator)
File "E:\01_Practice_materials\meta-llama\docker-llama2-chat\llama2-7b\model.py", line 58, in run
for text in streamer:
File "C:\Users\Logo.conda\envs\PhotoToText\lib\site-packages\transformers\generation\streamers.py", line 223, in __next__
value = self.text_queue.get(timeout=self.timeout)
File "C:\Users\Logo.conda\envs\PhotoToText\lib\queue.py", line 179, in get
raise Empty
_queue.Empty
It looks like you're running directly on Windows instead of using the container environment from the article, which offers much better consistency. My guess is that this is related to subtle behavioral differences in the PyPI packages across environments.
You can refer to the fix in the issue below; most likely the problem is your transformers version.
https://github.com/huggingface/transformers/issues/19290
Alternatively, you can follow some of the answers in that issue and adjust the parameters instead.
If all of that sounds too complicated, just follow the article's setup as-is. It's the simplest and most reproducible approach.
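If you'd rather adjust parameters than change environments, one common workaround for this particular ValueError is to drop the token_type_ids key from the tokenizer output before passing it to generate(). This is a minimal sketch, not the article's actual code: the helper name and the simulated tokenizer output below are my own illustrations.

```python
# Sketch of the workaround: LLaMA's generate() does not accept
# token_type_ids, so remove that key from the encoded inputs first.

def strip_unused_kwargs(encoded: dict) -> dict:
    """Return a copy of the tokenizer output without keys that
    LLaMA's generate() rejects."""
    cleaned = dict(encoded)                  # don't mutate the caller's dict
    cleaned.pop("token_type_ids", None)      # harmless if the key is absent
    return cleaned

# Simulated tokenizer output (a real one would contain tensors):
inputs = {
    "input_ids": [[1, 2, 3]],
    "attention_mask": [[1, 1, 1]],
    "token_type_ids": [[0, 0, 0]],
}
cleaned = strip_unused_kwargs(inputs)
# model.generate(**cleaned, ...) would then no longer raise the ValueError
```

In a real model.py you would apply this to the actual tokenizer output before calling model.generate(**cleaned, ...). Some tokenizers also accept return_token_type_ids=False when encoding, which avoids emitting the key in the first place; check your transformers version's tokenizer documentation.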
Thank you so much, and thanks for your project! It was indeed the package dependency issue you mentioned; after fixing it, it now runs locally. If I want to deploy this program to a cloud server, would modifying the Dockerfile be enough? The cloud platform doesn't support creating and starting containers via commands. Do you think that's feasible?
Hi! I followed the article 用 Docker 容器快速上手 Meta AI 出品的 LLaMA2 开源大模型. I can already access Gradio locally, but I'm getting this error: ValueError: The following model_kwargs are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list). What could be causing it?