Can you use the following example to test whether your API is working correctly?
You can also try a different API version by setting the '--LLM_model_name' parameter (a quick way to check which model names your key can access is sketched after the snippet below):
parser.add_argument(
    "--LLM_model_name",
    type=str,
    default="gpt-4-1106-preview",
    help="the Large language model. also like: [gpt-4o-mini-2024-07-18, gpt-4-turbo]",
)
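If you want to double-check which model names your key can actually access before launching, a minimal sketch like this may help. It assumes the pre-1.0 openai Python SDK (which matches the tracebacks in this thread) and that OPENAI_API_KEY is already exported; adjust as needed.
import os
import openai

# Assumes the pre-1.0 openai SDK and an exported OPENAI_API_KEY.
openai.api_key = os.environ["OPENAI_API_KEY"]

# List the model IDs this key can access, so you can pick a valid --LLM_model_name.
available = {m["id"] for m in openai.Model.list()["data"]}
for candidate in ["gpt-4-1106-preview", "gpt-4-turbo", "gpt-3.5-turbo"]:
    print(candidate, "available" if candidate in available else "NOT available")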
If there are no further questions, I will close the issue now.
If you have any questions later, you can give feedback at any time.
Thank you for your kind reply.
I have tested my API key following this command, and it gives me the right answer, so I have verified that my API key is correct. But when I pass the path of my photos or send some text to ChatGPT, it still gives me this:
resp, got_stream = self._interpret_response(result, stream)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: {sk-proj**b8A}. You can find your API key at https://platform.openai.com/account/api-keys.
Verification example:
import openai
openai.api_key = "sk-prXXXXXXXXXX"
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # You can also use other models depending on availability
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "count 1 to 10."},
    ],
)
print(completion['choices'][0]['message']['content'])
(CE3D) (base) lzha0538@gpu1-3w-1:~/3Dediting/CE3D$ /home/lzha0538/miniconda3/envs/gsgen/bin/python "/home/lzha0538/3Dediting/CE3D/API Example.py"
Sure! Here you go:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Have you set the environment variable in your terminal?
export OPENAI_API_KEY=XXX
Please replace 'XXX' with your API key.
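If you are not sure whether the export actually reached the Python process (for example, because the WebUI is launched from a different shell or conda environment), a minimal check like the following may help; it only assumes the OPENAI_API_KEY variable name used above:
import os

# Print whether the variable is visible to this process and a short prefix,
# so you can compare it against the key shown on platform.openai.com.
key = os.environ.get("OPENAI_API_KEY")
print("OPENAI_API_KEY set:", key is not None)
if key:
    print("key prefix:", key[:10] + "...")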
Thank you for your kind reply! Yes, I have set the environment variable in my terminal and have confirmed it. Which version of ChatGPT do you use in your model? When I enter some text in the WebUI, it still does not work well.
Could you provide the full command you ran? For example:
python3 chat_edit_3D.py --port 7862 --clean_FBends --load "Segmenting_cuda:0,\
ImageCaptioning_cuda:0,VisualQuestionAnswering_cuda:0,Text2Box_cuda:0,\
Inpainting_cuda:0,InstructPix2Pix_cuda:0"
I use the API of gpt-4-1106-preview, but gpt-3.5-turbo also works for me. If you use gpt-3.5-turbo, please use the following command:
python3 chat_edit_3D.py --LLM_model_name gpt-3.5-turbo --port 7862 --clean_FBends --load "Segmenting_cuda:0,\
ImageCaptioning_cuda:0,VisualQuestionAnswering_cuda:0,Text2Box_cuda:0,\
Inpainting_cuda:0,InstructPix2Pix_cuda:0"
Thanks a lot.
I kept trying to debug it and finally solved the problem. I just followed the discussion and instructions here: https://github.com/Significant-Gravitas/AutoGPT/issues/1422
I don't know which reply helped me. When I was preparing to go home, I just shut down the Linux system and rebooted it, and then it gave me this result.
It looks like at least OpenAI is working now. You could try making scene edits. The error shown in the picture looks like the LLM is not returning the result of the execution correctly, so you could try a dialogue with a different command.
Yes, I have tried a different command, and I found that sometimes when I enter a command it gives me an error. I'm using the small model. If I use the bigger (100 GB) one, will it be more stable?
python3 chat_edit_3D.py --LLM_model_name gpt-3.5-turbo --port 7862 --clean_FBends --load "Segmenting_cuda:0,\
ImageCaptioning_cuda:0,VisualQuestionAnswering_cuda:0,Text2Box_cuda:0,\
Inpainting_cuda:0,InstructPix2Pix_cuda:0"
Entering new AgentExecutor chain...
Action: Segment the Scene
Action Input: xxx.scn
[ WARN:3@19933.199] global loadsave.cpp:241 findDecoder imread_('xxx.scn'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
response = await route_utils.call_process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
result = await self.call_function(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
prediction = await anyio.to_thread.run_sync(  # type: ignore
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
return await future
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
result = context.run(func, *args)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/home/lzha0538/3Dediting/CE3D/chat_edit_3D.py", line 2924, in run_text
res = self.agent(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1376, in _call
next_step_output = self._take_next_step(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in <listcomp>
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1198, in _iter_next_step
observation = tool.run(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/tools.py", line 401, in run
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/tools.py", line 358, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/tools.py", line 566, in _run
else self.func(*args, **kwargs)
File "/home/lzha0538/3Dediting/CE3D/chat_edit_3D.py", line 2115, in inference_all
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.cpp:196: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Yes, the big one supports more functions. Maybe you could try the following command:
make run-all
Is this issue caused by ChatGPT's inability to accurately analyze the text prompt, or is there another underlying reason?
I tend to think this is caused by ChatGPT, since I haven't encountered this situation yet. Which API are you using? And could you try another conversation, such as "turn the scene of XXX.scn into Van Gogh style"?
This is the API I'm using. And when I enter "turn the scene of 11042146.scn into Van Gogh style", it gives me this result:
Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Instruct Scene Using Text
Action Input: 11042146.scn, turn the scene into Van Gogh style
===>Starting InstructPix2Pix Inference
Traceback (most recent call last):
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
response = await route_utils.call_process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
result = await self.call_function(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
prediction = await anyio.to_thread.run_sync(  # type: ignore
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
return await future
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
result = context.run(func, *args)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/home/lzha0538/3Dediting/CE3D/chat_edit_3D.py", line 2924, in run_text
res = self.agent(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1376, in _call
next_step_output = self._take_next_step(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in <listcomp>
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1198, in _iter_next_step
observation = tool.run(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/tools.py", line 401, in run
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/tools.py", line 358, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/tools.py", line 566, in _run
else self.func(*args, **kwargs)
File "/home/lzha0538/3Dediting/CE3D/chat_edit_3D.py", line 1398, in inference
inputs.split(",")[2].strip(),
IndexError: list index out of range
When I input "describe the image", it gives me this:
Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Get Scene Description
Action Input: xxx.scn
Traceback (most recent call last):
File "/home/lzha0538/3Dediting/CE3D/chat_edit_3D.py", line 430, in inference
Image.open(image_path).convert("RGB"), return_tensors="pt"
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/PIL/Image.py", line 3431, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/home/lzha0538/3Dediting/CE3D/xxx.scn'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
response = await route_utils.call_process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
result = await self.call_function(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
return await future
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
result = context.run(func, *args)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/home/lzha0538/3Dediting/CE3D/chat_edit_3D.py", line 2924, in run_text
res = self.agent(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1376, in _call
next_step_output = self._take_next_step(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in
I'm using the flower dataset that you have provided in the readme.
You have two errors involved here. The first is that the tool receives a scene name ('xxx.scn') that does not exist on disk, which is why imread/Image.open fails. The second is the IndexError at chat_edit_3D.py line 1398: the Action Input produced by the LLM contains only two comma-separated fields, while the tool expects a third 'objects' field.
One way to work around the second error is to set objects to a fixed value, e.g. objects="entire".
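For illustration only, here is a minimal sketch of that workaround (a hypothetical helper, not the repository's actual code; the field order is an assumption inferred from the inputs.split(",")[2].strip() call in the traceback):
def split_tool_input(inputs: str, default_objects: str = "entire"):
    # The agent sometimes emits only "scene_path, text"; pad with a fixed
    # objects value so indexing the third field no longer raises IndexError.
    parts = [p.strip() for p in inputs.split(",")]
    if len(parts) < 3:
        parts.insert(1, default_objects)  # assumed position of the objects field
    return parts[0], parts[1], parts[2]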
You can also send me your WeChat if you have one (my email is skfang@buaa.edu.cn), and then we can discuss it further.
I have provided my newest API key, but it still reports that the key is incorrect. It is really weird.
Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
response = await route_utils.call_process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
result = await self.call_function(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
prediction = await anyio.to_thread.run_sync(  # type: ignore
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
return await future
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
result = context.run(func, *args)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/home/lzha0538/Inpainting/CE3D/chat_edit_3D.py", line 2921, in run_text
res = self.agent(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1376, in _call
next_step_output = self._take_next_step(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1102, in <listcomp>
[
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 1130, in _iter_next_step
output = self.agent.plan(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/agents/agent.py", line 699, in plan
full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_community/llms/openai.py", line 1159, in _generate
full_response = completion_with_retry(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_community/llms/openai.py", line 123, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/tenacity/__init__.py", line 336, in wrapped_f
return copy(f, *args, **kw)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/tenacity/__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/tenacity/init.py", line 478, in call
result = fn(args, kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/langchain_community/llms/openai.py", line 121, in _completion_with_retry
return llm.client.create(kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(args, kwargs)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_resources/abstract/engine_apiresource.py", line 153, in create
response, , api_key = requestor.request(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: {sk-proj**b8A}. You can find your API key at https://platform.openai.com/account/api-keys.