I am wondering the same.
Is there an option to stop the crew execution, rather than Ctrl+C from the terminal? That kills the whole app.
I solved it by starting a new process for the crew execution and terminating that process when it needs to be stopped.
import multiprocessing
from typing import Optional

from crewai import Crew

class MyCrew:
    def __init__(self, crew: Crew) -> None:
        self.process: Optional[multiprocessing.Process] = None
        self.crew = crew

    def _execution_job(self) -> None:
        # Runs inside the child process.
        self.crew.kickoff()

    def execute(self) -> None:
        # If there is already an active process, terminate it first.
        if self.process is not None and self.process.is_alive():
            self.process.terminate()
        self.process = multiprocessing.Process(target=self._execution_job)
        self.process.start()

    def stop(self) -> None:
        if self.process is not None and self.process.is_alive():
            self.process.terminate()
I'm not sure if this is a good way to do it, though.
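For what it's worth, a minimal usage sketch of the wrapper above (my_crew stands for whatever Crew instance you have already configured; note that on macOS/Windows, where multiprocessing defaults to spawn, this needs to run under a __main__ guard and the Crew object has to be picklable):

import time

if __name__ == "__main__":
    runner = MyCrew(my_crew)  # my_crew: an already-configured Crew instance
    runner.execute()          # kickoff() now runs in a child process
    time.sleep(30)            # ... serve requests, wait for a UI event, etc.
    runner.stop()             # terminate the crew without killing the app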
There are 34 unmerged PRs; people should fork the project, IMHO. The maintainers possibly do not have enough time to keep up with such an intense buzz around the project. I hope we can fork it and start merging these, but that's a lot of work; maybe some organization could take care of it 🧐. The idea and concept are great, but without enough resources to review and merge suggestions, it goes nowhere.
In your example, at what point do you know it's time to call stop? What is the signal that indicates completion? I guess I'll be using your approach because there's no alternative at the moment. The crew gets stuck, but I need to run crews one by one over a folder of files with content 😓. I have no idea how to make it iterate if I don't know at which point a crew completes or is 'still thinking'. There should be some event dispatched that I can subscribe to; are there events like that in Python?
so a "step_callback" of the crew object itself, might be it, will try that :P
so a "step_callback" of the crew object itself, might be it, will try that :P
Good idea. I am currently using step_callback, which passes a signal to the main process after the last step is completed.
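A rough sketch of that idea, in case it helps anyone else: the child process builds the crew, the crew's step_callback forwards each step to the parent over a multiprocessing.Queue, and a sentinel is pushed when kickoff() returns. The agent/task details and the DONE sentinel are placeholders of mine, not crewAI API; only step_callback itself comes from the library.

import multiprocessing
from crewai import Agent, Task, Crew

DONE = "CREW_DONE"  # sentinel pushed onto the queue when kickoff() returns

def run_crew(queue: multiprocessing.Queue) -> None:
    # Build the crew inside the child so nothing unpicklable crosses
    # the process boundary. The agent/task here are just placeholders.
    agent = Agent(role="Researcher", goal="Summarize a topic", backstory="...")
    task = Task(description="Summarize topic X.", expected_output="A short summary", agent=agent)
    crew = Crew(
        agents=[agent],
        tasks=[task],
        # step_callback fires after each agent step; forward it to the parent.
        step_callback=lambda step: queue.put(("step", str(step))),
    )
    result = crew.kickoff()
    queue.put((DONE, str(result)))

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=run_crew, args=(queue,))
    proc.start()
    while True:
        kind, payload = queue.get()  # blocks until the child sends something
        if kind == DONE:
            print("crew finished:", payload)
            break
        print("step:", payload)
    proc.join()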
I currently use multiprocessing.Queue() to pass a signal from the last step of the crew to the main process. I'm thinking about using the asyncio event loop to solve it instead; maybe that would be better.
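For the asyncio route, a rough sketch is to run the blocking kickoff() in a worker thread via run_in_executor and await it. This keeps everything in one process, so it gives a clean completion signal but cannot forcibly terminate a stuck crew (threads cannot be killed); the function name and timeout here are my own, not crewAI API.

import asyncio
from crewai import Crew

async def run_crew_async(crew: Crew, timeout: float = 300.0):
    loop = asyncio.get_running_loop()
    # kickoff() is blocking, so push it onto the default thread-pool executor.
    try:
        return await asyncio.wait_for(
            loop.run_in_executor(None, crew.kickoff), timeout=timeout
        )
    except asyncio.TimeoutError:
        # The awaiting coroutine gives up, but the underlying thread keeps
        # running; threads cannot be killed, which is why the multiprocessing
        # approach above is still needed for a hard stop.
        return None

# usage: result = asyncio.run(run_crew_async(my_crew))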
You mean in the callback of the last agent? (That would only work in sequential mode, correct?)
Yeah, but if you're not sure which agent runs last, e.g. in hierarchical mode, you can add some post-processing that passes the signal after crew.kickoff() returns.
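Something like this, for reference: it works regardless of sequential or hierarchical mode because the event is only set after kickoff() returns (build_crew is a placeholder for your own crew setup, not crewAI API):

import multiprocessing

def crew_job(done):  # done: a multiprocessing.Event shared with the parent
    crew = build_crew()  # placeholder: construct your Crew here
    crew.kickoff()
    done.set()           # fires only after the whole crew has finished

if __name__ == "__main__":
    done = multiprocessing.Event()
    proc = multiprocessing.Process(target=crew_job, args=(done,))
    proc.start()
    if not done.wait(timeout=600):  # give the crew up to 10 minutes
        proc.terminate()            # hard stop if it is still "thinking"
    proc.join()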
Hey folks, just stopping by to let you all know we will be catching up on this, this week. We now have more people on our team helping out, so we should be able to catch up on PRs better :)
Hey, great to hear from you, so the project is not abandoned yet. I saw many issue threads with no responses, PRs, etc., so I thought it was being left to rot lol 😅, while the concept is great. I'll be reporting some bugs, and I also keep telemetry on; hope it helps with debugging.
Haha, not abandoned at all, sorry if it felt that way; we are actually a team of 4 now :D We have just been pushing stuff that was already on the roadmap before PRs, but we will be closing the loop on these now.
I'm thrilled to hear about that. CrewAI has been incredibly helpful, and I'm genuinely grateful for its value. I'm excited about the potential for more functionality in the future. Keep up the fantastic work!
I'd like to PR the functionality we're discussing here: how to handle many crews that are spawned (in my case I spawn one per file in a folder), or at least the ability to terminate one that has completed, because for now they just hang around. What would be the optimal way to implement that? I'm ready to spend a weekend trying to get it right 😁 (I'm primarily a JS developer, but I know Python too, at a beginner-to-intermediate level 😀, though I'm so rusty I've probably dropped back to beginner since I haven't used it in the last four years).
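Roughly the per-file pattern I have in mind, for the sake of discussion (make_crew_for_file is a placeholder for however you build a crew for one file; this is not an existing crewAI API):

import multiprocessing
from pathlib import Path

def process_file(path: str) -> None:
    crew = make_crew_for_file(path)  # placeholder: build a Crew for this file
    crew.kickoff()

if __name__ == "__main__":
    for path in sorted(Path("input_folder").glob("*.txt")):
        proc = multiprocessing.Process(target=process_file, args=(str(path),))
        proc.start()
        proc.join(timeout=600)  # wait up to 10 minutes per file
        if proc.is_alive():     # crew got stuck: hard stop and move on
            proc.terminate()
            proc.join()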
It sucks when the Getting Started example itself has this issue (crew does not terminate when tasks are finished).
Hey @nileshtrivedi, I was not aware of that; we are revamping our docs, but I'll double-check that getting started example myself. We also believe we have found the problem with this; it seems to be related to the way we are using multithreading, so we are working on that as well.
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
@joaomdmoura Hey Joao,
We are planning to put crewAI in production. crewAI is the backend for a conversational UI. We have a stop button, and we expect to terminate the ongoing backend crewAI execution once the user hits the Stop button. Currently there is no way for us to do this successfully without using multiprocessing (which we don't want to do).
I see that you mentioned there is scope for improvement in the current multi-threading approach. Could you please help us fix this?
Do you think this will take long?