Closed — Balogunolalere closed this issue 2 weeks ago
Hey @Balogunolalere, can you share the whole error log?
Finished chain. The next big trend in AI in healthcare 2024 will be Personalized Medicine and Predictive Analytics. This trend combines advancements in machine learning and data analysis to provide tailored treatment plans based on individual patient needs, genetic makeup, and medical history. By analyzing vast amounts of medical data, AI can identify patterns and risk factors for various diseases, allowing healthcare professionals to make more informed decisions about prevention, diagnosis, and treatment.
Pros:
Cons:
Market Opportunities: The market for Personalized Medicine and Predictive Analytics is expected to grow significantly over the next decade, driven by advancements in AI technology, increased demand for personalized care, and a growing focus on preventative healthcare. Key players in this space include tech giants like Google, Apple, and IBM, as well as startups focused on specific applications of AI in healthcare.
Potential Risks: While Personalized Medicine and Predictive Analytics have the potential to revolutionize healthcare, there are significant risks associated with their widespread adoption. Data privacy concerns, technical limitations, and socioeconomic inequity could all hinder progress in this field, leading to a slowdown or even backlash against AI-driven personalized medicine.
In conclusion, Personalized Medicine and Predictive Analytics represent the next big trend in AI in healthcare 2024. By combining machine learning with data analysis, this approach has the potential to transform patient outcomes, increase efficiency within the healthcare system, and drive significant market growth. However, it is crucial that we address the potential risks and challenges associated with this trend to ensure equitable access to personalized care and maintain public trust in AI-driven healthcare solutions.
```
Entering new CrewAgentExecutor chain...

Exception in thread Thread-1 (_execute):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/crewai/task.py", line 142, in _execute
    result = agent.execute_task(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/crewai/agent.py", line 168, in execute_task
    result = self.agent_executor.invoke(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/crewai/agents/executor.py", line 61, in _call
    next_step_output = self._take_next_step(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
    [
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
    [
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/crewai/agents/executor.py", line 108, in _iter_next_step
    output = self.agent.plan(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2427, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2414, in transform
    yield from self._transform_stream_with_config(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1494, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2378, in _transform
    for output in final_pipeline:
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1032, in transform
    for chunk in input:
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4164, in transform
    yield from self.bound.transform(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1032, in transform
    for chunk in input:
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1032, in transform
    for chunk in input:
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2794, in transform
    yield from self._transform_stream_with_config(
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1494, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2767, in transform
    futures = {
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2768, in <dictcomp>
    executor.submit(next, generator): (step_name, generator)
  File "/home/doombuggy_/Projects/crewProj/env/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 431, in submit
    return super().submit(
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 169, in submit
    raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
```
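For context on the final `RuntimeError`: CPython's `ThreadPoolExecutor` refuses to accept new work once it has been shut down, and the same guard fires when interpreter shutdown has already begun. The traceback suggests the main thread exited while an async task thread was still streaming output. A minimal sketch of the analogous guard, using an explicit `shutdown()` call since real interpreter shutdown is hard to demonstrate in-process:

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
executor.shutdown(wait=True)  # after this, the pool accepts no new work

try:
    executor.submit(print, "too late")
except RuntimeError as exc:
    # Same kind of guard that fires at interpreter shutdown in the traceback above
    error_message = str(exc)
    print(error_message)  # cannot schedule new futures after shutdown
```

This is why any workaround that keeps the main thread alive until background task threads finish makes the error go away.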
+1
I receive the same error in 0.19.0.
+1 Just started running into this all of a sudden.
Little update: I think it's related to using the search_tool in combination with async_execution=True; setting async_execution=False helped in my case.
I concur, same issue here, using async_execution=True with a search tool, specifically DuckDuckGoSearchRun(). Defining it multiple times, once per task, does not fix the issue.
+1 getting the same error
Please make sure you are on the latest version of CrewAI, which as of now is crewai 0.28.8 and crewai-tools 0.1.7.
+1 getting the same error
This happens since we have the option async_execution=True. Though not sure why.
I have the same problem without using search_tool or any other tools. I think it happens because some threads did not finish before the main thread finished. I'm not sure if this is the best way, but in my case it helped:
```python
import threading
import time

start_time = time.time()
result = crew.kickoff()

# Keep the main thread alive until only it (plus one helper thread) remains,
# so background task threads can finish before interpreter shutdown begins.
while threading.active_count() > 2:
    time.sleep(1)

end_time = time.time()
elapsed_time = round(end_time - start_time, 2)
print(f"Total time: {elapsed_time} seconds")
```
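A variant of the same idea that avoids the hard-coded `active_count() > 2` threshold (which can misfire if other libraries spawn their own threads) is to join the outstanding non-daemon threads directly. This is a generic sketch, not CrewAI-specific code; the worker thread below is a stand-in for an async task thread:

```python
import threading
import time

def wait_for_background_threads(timeout: float = 60.0) -> None:
    """Join every non-daemon worker thread so the interpreter does not
    start shutting down while tasks are still scheduling work."""
    deadline = time.monotonic() + timeout
    for thread in threading.enumerate():
        if thread is threading.main_thread() or thread.daemon:
            continue
        thread.join(timeout=max(0.0, deadline - time.monotonic()))

# Stand-in for an async task thread still doing work:
worker = threading.Thread(target=time.sleep, args=(0.2,))
worker.start()
wait_for_background_threads()
print(worker.is_alive())  # False: the worker finished before we moved on
```

Joining by name rather than polling a count also returns as soon as the threads finish instead of sleeping in one-second steps.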
> This happens since we have the option async_execution=True. Though not sure why.

Looks to me like the Agent thread does not like having the tools initialized from another thread. Perhaps @joaomdmoura can shed some light?
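To illustrate that hypothesis in isolation (a generic Python pitfall, not a confirmed trace through CrewAI's internals): any state a tool stores in `threading.local` during setup in one thread is invisible to the thread that later runs the task.

```python
import threading

local = threading.local()

def init_tool():
    # Pretend this is tool setup performed in the main thread
    local.session = "configured"

def run_task(results):
    # A worker thread sees its own fresh, empty threading.local namespace
    results.append(getattr(local, "session", None))

init_tool()
results = []
worker = threading.Thread(target=run_task, args=(results,))
worker.start()
worker.join()
print(results)  # [None]: the worker never saw the main thread's setup
```

If anything in the tool or HTTP-client stack relies on thread-local state like this, initializing it in one thread and invoking it from an async task thread could behave unexpectedly.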
I was facing the same issue, but setting async_execution=False fixed it for me.
I have the same issue. Is there a proper fix? Setting async_execution=False works around it but disables one of CrewAI's features. I did the DeepLearning.AI course and hit this issue when trying to run the Event Planning code. Running the code in the provided Jupyter window works fine (so there must be some setup that is OK), but running the code locally gives this runtime error.
I have tried this with version 0.1.6 of crewai-tools (as per the course), and version 0.1.7 as suggested by TheCyberTech and version 0.2.6 (the latest version as I write this). Same error each time.
Perhaps the issue is with the Search tool itself - maybe my API key does not permit simultaneous requests while the API key used in the DeepLearning course does?
@DanielM-oz +1, I've had the same experience. Setting async_execution=False works, but it'd be nice to be able to run tasks in parallel.
@alexnodeland is this on the more recent versions? I'll pump this to the top of the list
@joaomdmoura this was for crewai==0.28.8 & crewai_tools==0.1.6 specifically (as per the course). I also tried the other versions mentioned by @DanielM-oz, but no luck there either.
The notebook in the deeplearning.ai course worked, but I couldn't replicate it locally
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.