OpenDriveLab / DriveLM

[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
https://opendrivelab.com/DriveLM/
Apache License 2.0
798 stars 49 forks

Got error when running evaluation.py #62

Closed uni-zhuan closed 2 months ago

uni-zhuan commented 5 months ago

When running evaluation.py, I encountered a TypeError raised inside a multiprocessing worker.

    evaluation start!
    Exception in thread Thread-3:
    Traceback (most recent call last):
      File "/Users/unizhuan/anaconda3/envs/llama_adapter_v2/lib/python3.8/threading.py", line 932, in _bootstrap_inner
        self.run()
      File "/Users/unizhuan/anaconda3/envs/llama_adapter_v2/lib/python3.8/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "/Users/unizhuan/anaconda3/envs/llama_adapter_v2/lib/python3.8/multiprocessing/pool.py", line 576, in _handle_results
        task = get()
      File "/Users/unizhuan/anaconda3/envs/llama_adapter_v2/lib/python3.8/multiprocessing/connection.py", line 251, in recv
        return _ForkingPickler.loads(buf.getbuffer())
    TypeError: __init__() takes 1 positional argument but 2 were given
    Process SpawnPoolWorker-24:
    Process SpawnPoolWorker-20:
    Process SpawnPoolWorker-21:
    Process SpawnPoolWorker-30:
    Process SpawnPoolWorker-8:
    Process SpawnPoolWorker-22:
    ....

I'm wondering why this happens and how to solve it 🤔

DevLinyan commented 5 months ago

Could you provide more details, such as your files and code? I cannot reproduce your error at the moment.

yabuke commented 5 months ago

Has this been solved? I'm hitting the same problem.

yabuke commented 5 months ago

> Could you provide more details such as your files and code? I can not reproduce your error now.

    with Pool(32) as p:  # Change the number based on your CPU cores
        scores = p.map(self.chatgpt_eval.forward, data)

I tested the code and found that the lines above in evaluation.py are the cause, but I don't know how to solve it.
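For context, a common cause of exactly this `TypeError` with `multiprocessing.Pool` (a general Python pitfall, not necessarily the exact bug in evaluation.py) is an exception raised in a worker whose `__init__` signature doesn't match the `args` it carries. The pool pickles results and exceptions to send them back to the parent, and unpickling then fails while re-constructing the exception. A minimal, hypothetical reproduction:

```python
import pickle

# Hypothetical reproduction: multiprocessing sends worker exceptions back to
# the parent via pickle. Unpickling re-calls the exception's __init__ with
# the stored args; if the signature doesn't accept them, it fails with
# "TypeError: __init__() takes 1 positional argument but 2 were given".
class CustomError(Exception):
    def __init__(self):                        # accepts no arguments beyond self
        super().__init__("evaluation failed")  # but .args becomes a 1-tuple

payload = pickle.dumps(CustomError())
try:
    pickle.loads(payload)  # attempts CustomError("evaluation failed")
except TypeError as e:
    print("unpickle failed:", e)
```

If an exception raised inside `self.chatgpt_eval.forward` has such a signature (e.g. one coming from a library), the traceback surfaces in the pool's result-handling thread, just as in the report above.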

piqiuni commented 5 months ago

@ChonghaoSima I got the same error, caused by the merge of pull request #60.

ChonghaoSima commented 5 months ago

This is to accelerate the evaluation process: the OpenAI API has a maximum requests-per-second limit that we want to fully utilize. You can decrease the number of worker processes (32 here) on your local machine for a better experience.
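For reference, tuning the worker count just means changing the argument to `Pool`. A minimal sketch (with `square` as a stand-in for `self.chatgpt_eval.forward`, which is an assumption for illustration):

```python
from multiprocessing import Pool

def square(x):
    # stand-in for self.chatgpt_eval.forward; must be a top-level function
    # so it can be pickled and shipped to worker processes
    return x * x

def run_eval(workers):
    # lower `workers` to reduce concurrent API calls (and local CPU load)
    with Pool(workers) as p:
        return p.map(square, range(10))

if __name__ == "__main__":
    print(run_eval(2))
```

The `if __name__ == "__main__":` guard matters on platforms that use the spawn start method (the `SpawnPoolWorker` processes in the traceback), since each worker re-imports the module.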

piqiuni commented 5 months ago

@ChonghaoSima Changing the Pool size to 1 still produces the error.

DevLinyan commented 5 months ago

@piqiuni If it doesn't work, you can revert to the version of the code prior to pull request #60.

piqiuni commented 5 months ago

@DevLinyan I ran gpt_eval.py and got `You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.` I will try to change the code. This still seems to be caused by #53, which I mentioned before.

Thanks!

piqiuni commented 5 months ago

Changing call_chatgpt in gpt_eval.py to:

    def call_chatgpt(self, chatgpt_messages, max_tokens=40, model="gpt-3.5-turbo"):
        # openai>=1.0.0: openai.ChatCompletion.create was replaced by
        # openai.chat.completions.create, and the response is an object
        # with attributes rather than a dict
        response = openai.chat.completions.create(
            model=model, messages=chatgpt_messages, temperature=0.6, max_tokens=max_tokens
        )
        reply = response.choices[0].message.content
        total_tokens = response.usage.total_tokens

        return reply, total_tokens

And change prompts to:

  prompts = "Rate my answer based on the correct answer out of 100, with higher scores indicating that the answer is closer to the correct answer, and you should be accurate to single digits like 62, 78, 41, etc. Output the number only! Output the number only! If the answer does not correspond to the correct answer provided, give a 0"
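Since the prompt asks the model to output a bare number but replies can still contain extra text, a small parsing helper (hypothetical, not part of the repo) can make score extraction more robust:

```python
import re

def parse_score(reply, default=0):
    # Hypothetical helper: take the first integer found in the model's
    # reply and clamp it to the 0-100 scoring range; fall back to a
    # default when no number is present.
    match = re.search(r"\d{1,3}", reply)
    if match is None:
        return default
    return max(0, min(100, int(match.group())))
```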

This solved the problem. I think you should pin a specific version of the openai package in requirements.txt.
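For example, a requirements.txt entry could look like the following (the version constraint here is illustrative; pick whatever version the code is actually tested against):

```
# requirements.txt -- the updated call_chatgpt needs the new-style client API
openai>=1.0.0
```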