openai / human-eval

Code for the paper "Evaluating Large Language Models Trained on Code"

Error running evaluate_functional_correctness samples.json #48

Open · nextdoorUncleLiu opened this issue 2 months ago

nextdoorUncleLiu commented 2 months ago

When I run evaluate_functional_correctness samples.json, I receive the following error message:

Reading samples...
164it [00:00, 62612.95it/s]
Running test suites...
  0%|          | 0/164 [00:00<?, ?it/s]Reading samples...
164it [00:00, 53899.53it/s]
Running test suites...
  0%|          | 0/164 [00:00<?, ?it/s]Reading samples...
Reading samples...
  0%|          | 0/164 [00:00<?, ?it/s]
164it [00:00, 33420.75it/s]
Running test suites...
164it [00:00, 37077.72it/s]
Running test suites...
  0%|          | 0/164 [00:00<?, ?it/s]
  0%|          | 0/164 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/user/Desktop/human-eval/.venv/bin/evaluate_functional_correctness", line 5, in <module>
    from human_eval.evaluate_functional_correctness import main
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/human_eval/evaluate_functional_correctness.py", line 28, in <module>
    sys.exit(main())
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/human_eval/evaluate_functional_correctness.py", line 25, in main
    fire.Fire(entry_point)
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/fire/core.py", line 143, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/fire/core.py", line 477, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/human_eval/evaluate_functional_correctness.py", line 20, in entry_point
    results = evaluate_functional_correctness(sample_file, k, n_workers, timeout, problem_file)
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/human_eval/evaluation.py", line 77, in evaluate_functional_correctness
    result = future.result()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 437, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/Users/user/Desktop/human-eval/.venv/lib/python3.8/site-packages/human_eval/execution.py", line 58, in check_correctness
    with Manager() as manager:
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 57, in Manager
Reading samples...
    m.start()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/managers.py", line 579, in start
    self._process.start()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
0it [00:00, ?it/s]Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 265, in run_path

I have also tried the method from https://github.com/openai/human-eval/issues/18, but it did not solve my problem.
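
The RuntimeError text points at the usual cause: on macOS, Python 3.8 uses the "spawn" start method by default, so each worker process re-imports the entry script before the parent has finished bootstrapping, which is also why "Reading samples..." appears several times in the log. Below is a minimal sketch of the guard idiom the error message describes, calling the evaluator from a small wrapper script instead of the console entry point. The script name and argument values are illustrative placeholders; only evaluate_functional_correctness and its positional parameters (sample_file, k, n_workers, timeout, problem_file) are taken from the traceback above.

    # run_eval.py -- hypothetical wrapper script, not part of the human-eval repo
    import multiprocessing

    # Function and positional parameters as shown in the traceback
    # (human_eval/evaluation.py); the values below are placeholders.
    from human_eval.evaluation import evaluate_functional_correctness

    def main():
        # Optional workaround: force the pre-3.8 "fork" start method so workers
        # inherit the parent's state instead of re-importing the main module.
        # Fork has known caveats on macOS, so treat this as a trial, not a fix.
        multiprocessing.set_start_method("fork", force=True)

        results = evaluate_functional_correctness(
            "samples.json",             # sample_file
            [1, 10, 100],               # k (assumed: list of pass@k values)
            4,                          # n_workers
            3.0,                        # timeout in seconds (assumed unit)
            "data/HumanEval.jsonl.gz",  # problem_file (placeholder path)
        )
        print(results)

    # This guard is exactly what the RuntimeError asks for: it stops spawned
    # children from re-executing the module body when they import it.
    if __name__ == "__main__":
        main()

Either change alone may be enough: the __main__ guard satisfies the spawn bootstrapping check, while forcing "fork" sidesteps the re-import entirely.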