Open muzhi1991 opened 11 months ago
This issue is stale because it has been open 3 days with no activity. Remove stale label or comment or this will be closed in 4 days.
Hi, I believe we designed it this way. Could you point out what problems it could lead to?
On my macOS, if tool selection is enabled, the LLM chain raises an exception (tracing shows it is caused by this macOS/Python fork problem), so I switched to spawn mode at the beginning of main.py (in the `if __name__ == "__main__"` block). After this modification, the request works normally.
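The switch described above can be sketched as follows. This is a minimal illustration, not the project's actual main.py: `serve()` is a hypothetical stand-in for the real backend startup.

```python
import multiprocessing

def serve():
    # Hypothetical placeholder for the real backend startup.
    print("backend started")

if __name__ == "__main__":
    # Use "spawn" instead of "fork", which is problematic on macOS.
    # force=True overrides any start method configured earlier.
    multiprocessing.set_start_method("spawn", force=True)
    serve()
```

Note that `set_start_method` must run before any `Process` is created, which is why it belongs at the very top of the `__main__` block.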
However, another problem then occurs: the subprocess never exits, so `chat_thread.is_alive()` is always true.
Finally, I found that when `CODE_EXECUTION_MODE == 'docker'`, the subprocess always starts the `start_kernel_publisher` thread, and this thread never stops. If I disable this mode, the child process exits normally.
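This matches how Python process shutdown works in general: a process only exits once all of its non-daemon threads have finished, so a publisher-style thread looping forever keeps the child alive. A hedged sketch of the mechanism (the `kernel_publisher` function here is a stand-in, not the real `start_kernel_publisher`):

```python
import threading
import time

stop_event = threading.Event()

def kernel_publisher():
    # Stand-in for a publisher thread: loops until told to stop.
    while not stop_event.is_set():
        time.sleep(0.05)

# daemon=True means this thread cannot keep the process alive at exit;
# without it (and without a stop signal), the process would hang forever.
t = threading.Thread(target=kernel_publisher, daemon=True)
t.start()

# ... do work ...
stop_event.set()     # explicit shutdown signal, an alternative to daemon=True
t.join(timeout=1)
print(t.is_alive())  # → False
```

Either marking the thread as a daemon or signalling it to stop would let the child process exit; which fix is appropriate depends on whether the publisher needs a clean shutdown.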
In this method, `multiprocess.Process` copies the `executor` argument for the child process to run with, so the parent's and child's objects are distinct: their addresses differ, i.e. `id(executor)` in the parent `!=` `id(executor)` in the child. (On macOS this is with `multiprocess.set_start_method("fork", True)` set.)
When I checked the chat-streaming code, this variable seems to be shared by multiple processes, and I also found it is assigned by another process. But `is_end` is a local variable. There may be a problem with this code, or is it designed like this? https://github.com/xlang-ai/OpenAgents/blob/880e26adfe380e999962fc645fc8fc80bd72f103/backend/utils/streaming.py#L266-L273
I am also confused because, if this variable were made shareable, the logic seems like it would break. The author's design appears to stop the stream when `chat_thread` (which is actually a Python process) is no longer alive. When I tested on macOS (M1, version 14.1.2), this `is_alive` check may not work; that may be yet another problem.