crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
20.79k stars, 2.87k forks

Stop the crew execution #374

Closed abdarwish23 closed 2 months ago

abdarwish23 commented 7 months ago

Is there an option to stop the crew execution, rather than pressing Ctrl+C in the terminal? Ctrl+C kills the whole app.

remichu-ai commented 7 months ago

I am wondering the same.

PeterHiroshi commented 7 months ago

Is there an option to stop the crew execution, rather than ctrl+c from the terminal as this kills the whole app

I solved it by starting a new process for the crew execution, and terminating the process when it needed to be stopped.

import multiprocessing
from typing import Optional

from crewai import Crew


class MyCrew:

    def __init__(self, crew: Crew) -> None:
        self.process: Optional[multiprocessing.Process] = None
        self.crew = crew

    def _execution_job(self) -> None:
        self.crew.kickoff()

    def execute(self) -> None:
        # If there is already an active process, terminate it first
        if self.process is not None and self.process.is_alive():
            self.process.terminate()
        self.process = multiprocessing.Process(target=self._execution_job)
        self.process.start()

    def stop(self) -> None:
        if self.process is not None and self.process.is_alive():
            self.process.terminate()

I'm not sure if this is a good way, though.
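For what it's worth, the mechanics of this pattern can be checked in isolation. The sketch below has nothing crewAI-specific in it; a sleep stands in for `crew.kickoff()`, just to show that `terminate()` kills only the child process while the main app keeps running.

```python
import multiprocessing
import time

def _long_running_job():
    # Stand-in for crew.kickoff(): a job that would otherwise never return.
    time.sleep(3600)

# On Windows/macOS (spawn start method) wrap the lines below in
# `if __name__ == "__main__":`; on Linux (fork) this runs as-is.
proc = multiprocessing.Process(target=_long_running_job)
proc.start()
assert proc.is_alive()
proc.terminate()      # sends SIGTERM to the child process only
proc.join(timeout=5)  # reap it; the parent process keeps running
assert not proc.is_alive()
```

One caveat with `terminate()`: it stops the child abruptly, so any cleanup inside `kickoff()` (open files, pending API calls) is skipped.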

aliensouls commented 7 months ago

There are 34 open PRs that have not been merged. IMHO people should fork the project; the maintainers possibly do not have enough time to keep up with such an intense buzz around it. I hope we can fork it and start merging these, but that is a lot of work, so maybe some organization could take it on 🧐. The idea and concept are great, but without enough resources to review and merge contributions, it goes nowhere.

aliensouls commented 7 months ago

I solved it by starting a new process for the crew execution, and terminating the process when it needed to be stopped. […] I'm not sure if this is a good way

In your example, at what point do you know it's time to call stop()? What is the signal that indicates completion? I guess I will use your approach, because there is no alternative at the moment. The crew gets stuck, but I need to run crews one by one over a folder of files 😓 and I have no idea how to make it iterate if I can't tell when a crew has completed or is still thinking. There has to be some event dispatched that I can subscribe to; are there events like that in Python?

aliensouls commented 7 months ago

So a "step_callback" on the crew object itself might be it; I'll try that :P

PeterHiroshi commented 7 months ago

so a "step_callback" of the crew object itself, might be it, will try that :P

Good idea. I am currently using step_callback, which passes a signal to the main process after the last step is completed.

PeterHiroshi commented 7 months ago

in your example, at what point do you know it's time to call the 'stop' for it? what is the signal which indicates completion? I guess will be using your approach because there's no alternative at the moment, the crew gets stuck but I need to run them one by one over a folder of files with content 😓 have no idea how to make it iterate if I don't know at which points it completes or 'still thinks', there has to be some event dispatched and subscribe to that event - are there events like that in python?

I currently use multiprocessing.Queue() to pass a signal from the crew's last step to the main process. I'm also thinking about using the asyncio event loop to solve it; that might be better.
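Concretely, the Queue handshake looks something like this; `crew.kickoff()` is replaced by a placeholder result so the sketch is self-contained and crewAI-free.

```python
import multiprocessing

def _execution_job(queue):
    # In the real setup this would call crew.kickoff(); a placeholder
    # result stands in here so only the handshake itself is shown.
    result = "final crew output"
    queue.put(("done", result))  # completion signal for the main process

queue = multiprocessing.Queue()
proc = multiprocessing.Process(target=_execution_job, args=(queue,))
proc.start()
status, result = queue.get(timeout=30)  # blocks until the child signals
proc.join()
```

The main process can then safely move on to the next file once `queue.get()` returns.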

aliensouls commented 7 months ago

I currently use multiprocessing.Queue() to pass a signal at the last step of the crew to the main process. I'm thinking about using the event loop in Asyncio to solve it. Maybe that way would be better.

You mean in the callback of the last agent? (That will only work in sequential mode, correct?)

PeterHiroshi commented 7 months ago

you mean in the callback of the last agent? (it'll work only in sequential mode, correct?)

Yeah. If you are not sure which agent is the last one, e.g. in hierarchical mode, you can add some post-processing that passes the signal right after crew.kickoff() returns.

joaomdmoura commented 7 months ago

Hey folks, just stopping by to let you all know we will be catching up on this this week. We now have more people on our team helping out, so we should be able to keep up with PRs better :)

aliensouls commented 7 months ago

hey folks, just stopping by to let you all know we will be catchin up to this, this week, we now have more people on our team helping out, so we should be able to better catch up to PRs :)

Hey, great to hear from you; so the project is not abandoned after all. I saw many issue threads with no responses, unreviewed PRs, etc., so I thought it was being left to rot lol 😅, even though the concept is great. I'll be reporting some bugs, and I also keep telemetry on; I hope that helps with debugging.

joaomdmoura commented 7 months ago

Haha, not abandoned at all; sorry if it felt that way. We are actually a team of 4 now :D It's just that we have been pushing things that were already on the roadmap ahead of PRs, but we will be closing the loop on these now.

PeterHiroshi commented 7 months ago

haha not abandoned at all, sorry if it felt that way, we are actually a team of 4 now :D just that we have been pushing more stuff that was already on the roadmap before PRs, but will be closing the loop on these now […]

I'm thrilled to hear about that. CrewAI has been incredibly helpful, and I'm genuinely grateful for its value. I'm excited about the potential for more functionality in the future. Keep up the fantastic work!

aliensouls commented 7 months ago

haha not abandoned at all, sorry if it felt that way, we are actually a team of 4 now :D just that we have been pushing more stuff that was already on the roadmap before PRs, but will be closing the loop on these now […]

I'd like to open a PR for the functionality we're discussing here: how to handle many crews that get spawned (in my case I spawn one per file in a folder), or at least the ability to terminate one that has completed, because for now they just keep hanging around. What would be the optimal way to implement that? I'm ready to spend a weekend trying to get it right 😁 (I'm primarily a JS developer, but I know Python too, at a beginner-to-intermediate level 😀, although I'm so rusty I've probably dropped back to beginner, since I haven't used it in the last 4 years.)

nileshtrivedi commented 6 months ago

It sucks that the Getting Started example itself has this issue (the crew does not terminate when its tasks are finished).

joaomdmoura commented 6 months ago

Hey @nileshtrivedi, I was not aware of that. We are revamping our docs, but I'll double-check that Getting Started example myself. We also believe we have found the cause of this; it seems to be related to the way we are using multithreading, so we are working on that as well.

github-actions[bot] commented 2 months ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 2 months ago

This issue was closed because it has been stalled for 5 days with no activity.

Sarangk90 commented 1 month ago

@joaomdmoura Hey Joao,

We are planning to put CrewAI into production. CrewAI is the backend for a conversational UI. The UI has a Stop button, and when the user hits it we need to terminate the ongoing CrewAI execution on the backend. Currently there is no way for us to do this reliably without using multiprocessing (which we don't want to do).
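The closest interim workaround we have considered is cooperative cancellation via the crew's step_callback, sketched below. To be clear, this is our own sketch, not an official CrewAI API; whether an exception raised inside step_callback propagates cleanly out of kickoff() is an untested assumption on our side.

```python
import threading

class StopRequested(Exception):
    """Raised to request that a running crew abort between steps."""

# Set from the UI's Stop-button handler.
stop_event = threading.Event()

def step_callback(step_output):
    # crewAI calls step_callback after each agent step; raising here is an
    # unofficial way to bail out between steps (assumption: the exception
    # is not swallowed by crewAI internals).
    if stop_event.is_set():
        raise StopRequested("user pressed Stop")

# Hypothetical wiring:
# crew = Crew(agents=[...], tasks=[...], step_callback=step_callback)
```

Even if this works, a supported cancellation API would still be much better, which is why we are asking.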

I see that you have mentioned there is room for improvement in the current multithreading approach. Could you please help us fix this?

Do you think this will take long?