I'm on the main branch. Here is the full stack trace of the error:
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Traceback (most recent call last):
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/gradio/routes.py", line 444, in run_predict
event_data=event_data,
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/gradio/blocks.py", line 1347, in process_api
fn_index, inputs, iterator, request, event_id, event_data
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/gradio/blocks.py", line 1090, in call_function
prediction = await utils.async_iteration(iterator)
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/gradio/utils.py", line 341, in async_iteration
return await iterator.__anext__()
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/gradio/utils.py", line 335, in __anext__
run_sync_iterator_async, self.iterator, limiter=self.limiter
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/anyio/to_thread.py", line 34, in run_sync
func, *args, cancellable=cancellable, limiter=limiter
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
return next(iterator)
File "web_demo_hf.py", line 73, in predict_new_image
temperature=temperature):
File "/home/miniconda3/envs/visualglm-6B/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 34, in generator_context
gen = func(*args, **kwargs)
TypeError: stream_chat() got multiple values for argument 'max_length'
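For context, this kind of TypeError usually means the call site in web_demo_hf.py passes max_length both positionally and as a keyword, which can happen when the stream_chat signature changes upstream. A minimal illustration with a hypothetical signature (not the actual VisualGLM-6B API):

```python
# Hypothetical signature for illustration only; the real stream_chat
# in VisualGLM-6B may differ.
def stream_chat(tokenizer, query, history=None, max_length=2048, temperature=0.95):
    return max_length

# The 4th positional argument (1024) already binds to max_length,
# so also passing max_length=2048 as a keyword raises a TypeError.
try:
    stream_chat("tok", "hi", [], 1024, max_length=2048)
except TypeError as e:
    print(e)  # stream_chat() got multiple values for argument 'max_length'
```

The usual fix is to make the caller's arguments match the current signature, e.g. pass max_length only once (either positionally or by keyword, not both).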