towhee-io / towhee

Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
https://towhee.io
Apache License 2.0

[Bug]: Run operator question-answer failed reporting 'Response' object has no attribute 'context' #2576

Closed: NicoYuan1986 closed this issue 1 year ago

NicoYuan1986 commented 1 year ago

Is there an existing issue for this?

Current Behavior

Running the question-answer operator fails with `'Response' object has no attribute 'context'`. See https://towhee.io/prompt/question-answer.


Expected Behavior

pass

Steps To Reproduce

No response

Environment

- Towhee version(e.g. v0.1.3 or 8b23a93):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:

Anything else?

No response

junjiejiangjjj commented 1 year ago

Fixed.

NicoYuan1986 commented 1 year ago

The 'context' error is fixed, but another error occurred. @junjiejiangjjj

https://towhee.io/prompt/question-answer


>>> an1 = p('Tell me something about Towhee', towhee_docs, []).get()[0]

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/runtime_pipeline.py", line 159, in __call__
    return self._call(*inputs, profiler=False, tracer=False)[0]
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/runtime_pipeline.py", line 177, in _call
    return graph(inputs), [graph.time_profiler] if profiler else None, [graph.data_queues] if tracer else None
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/runtime_pipeline.py", line 116, in __call__
    return f.result()
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/runtime_pipeline.py", line 34, in result
    ret = self._graph.result()
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/runtime_pipeline.py", line 95, in result
    raise RuntimeError(errs)
RuntimeError: Node-LLM/OpenAI-1 runs failed, error msg: This model's maximum context length is 4097 tokens. However, your messages resulted in 5422 tokens. Please reduce the length of the messages., Traceback (most recent call last):
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/nodes/node.py", line 158, in _call
    return True, self._op(*inputs), None
  File "/Users/zilliz/.towhee/operators/LLM/OpenAI/versions/main/openai_chat.py", line 36, in __call__
    response = openai.ChatCompletion.create(
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5422 tokens. Please reduce the length of the messages.
, Traceback (most recent call last):
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/nodes/node.py", line 171, in process
    self.process_step()
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/nodes/_map.py", line 63, in process_step
    assert succ, msg
AssertionError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5422 tokens. Please reduce the length of the messages., Traceback (most recent call last):
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/nodes/node.py", line 158, in _call
    return True, self._op(*inputs), None
  File "/Users/zilliz/.towhee/operators/LLM/OpenAI/versions/main/openai_chat.py", line 36, in __call__
    response = openai.ChatCompletion.create(
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5422 tokens. Please reduce the length of the messages.
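The error above is not a Towhee bug per se: the retrieved documents plus the chat history produce a prompt larger than the model's 4097-token context window. A common workaround is to clip the retrieved context to a rough budget before building the prompt. The helper below is a hypothetical sketch (not a Towhee or OpenAI API), using the crude heuristic of about 4 characters per English token:

```python
def clip_context(docs, char_budget=2000):
    """Concatenate retrieved documents, truncating at char_budget characters.

    At roughly 4 characters per token for English text, a 2000-character
    budget leaves ample headroom under a 4097-token context window once the
    question, template, and history are added.
    """
    joined = '\n'.join(docs)
    return joined[:char_budget]
```

The revised sample code in the later comments takes a similar approach, slicing the document input to its first 2000 characters inside a `map` step.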

NicoYuan1986 commented 1 year ago

Same: https://towhee.io/prompt/template

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5379 tokens. Please reduce the length of the messages.

junjiejiangjjj commented 1 year ago

I've revised the sample code; please try again.

NicoYuan1986 commented 1 year ago

The above error is fixed!

A new error appeared. https://towhee.io/prompt/template


>>> 
>>> p = (
...     pipe.input('question', 'doc', 'history')
...     .map('docs', 'docs', lambda x: x[:2000])
...     .map(('question', 'doc', 'history'), 'prompt', ops.prompt.template(temp, ['question', 'context']))
...     .map('prompt', 'answer', ops.LLM.OpenAI())
...     .output('answer')
... )
Traceback (most recent call last):
  File "<stdin>", line 6, in <module>
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/pipeline.py", line 100, in output
    run_pipe = RuntimePipeline(dag_dict)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/runtime_pipeline.py", line 143, in __init__
    self._dag_repr = DAGRepr.from_dict(dag)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/dag_repr.py", line 380, in from_dict
    DAGRepr.check_nodes(nodes, top_sort)
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/dag_repr.py", line 83, in check_nodes
    check_set(nodes[name].inputs, set(all_inputs[name]))
  File "/Users/zilliz/virtual-environment/towhee/lib/python3.10/site-packages/towhee/runtime/check_utils.py", line 69, in check_set
    raise ValueError(f'The DAG Nodes inputs {str(inputs)} is not valid, which is not declared: {inputs - all_inputs}.')
ValueError: The DAG Nodes inputs {'docs'} is not valid, which is not declared: {'docs'}.
>>> 
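The `ValueError` comes from Towhee's DAG validation: the first `map` reads a column named `docs`, but the pipeline only declares `question`, `doc`, and `history` as inputs, so `docs` is never produced by any upstream node. The snippet below is an illustration of that check (not Towhee's actual implementation): every column a node reads must already be declared, either by `pipe.input` or by a previous node's output.

```python
def check_node_inputs(declared, node_inputs):
    """Raise if a node reads a column no upstream node has declared."""
    undeclared = set(node_inputs) - set(declared)
    if undeclared:
        raise ValueError(
            f'The DAG Nodes inputs {set(node_inputs)} is not valid, '
            f'which is not declared: {undeclared}.')

# The sample pipeline declares ('question', 'doc', 'history'), but the
# first map reads 'docs', so validation fails exactly as in the traceback.
```

Renaming `docs` to `doc` in the `map` step (or declaring `docs` in `pipe.input`) resolves the error.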
NicoYuan1986 commented 1 year ago

Fixed.