crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
19k stars · 2.62k forks

Why is telemetry being sent to telemetry.crewai.com, and why did this start happening 2 hours ago, breaking my working app? #1022

Closed Kingbadger3d closed 1 month ago

Kingbadger3d commented 1 month ago

2024-07-29 14:28:52,212 - 16068 - __init__.py-__init__:369 - ERROR: Exception while exporting Span batch.
Traceback (most recent call last):
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
    raise err
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
    sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connectionpool.py", line 1060, in _validate_conn
    conn.connect()
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connection.py", line 179, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x000001F48498C6D0>, 'Connection to telemetry.crewai.com timed out. (connect timeout=30)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\requests\adapters.py", line 667, in send
    resp = conn.urlopen(
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\connectionpool.py", line 801, in urlopen
    retries = retries.increment(
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\urllib3\util\retry.py", line 594, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='telemetry.crewai.com', port=4319): Max retries exceeded with url: /v1/traces (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001F48498C6D0>, 'Connection to telemetry.crewai.com timed out. (connect timeout=30)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\opentelemetry\sdk\trace\export\__init__.py", line 367, in _export_batch
    self.span_exporter.export(self.spans_list[:idx])  # type: ignore
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\opentelemetry\exporter\otlp\proto\http\trace_exporter\__init__.py", line 169, in export
    return self._export_serialized_spans(serialized_data)
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\opentelemetry\exporter\otlp\proto\http\trace_exporter\__init__.py", line 139, in _export_serialized_spans
    resp = self._export(serialized_data)
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\opentelemetry\exporter\otlp\proto\http\trace_exporter\__init__.py", line 114, in _export
    return self._session.post(
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\requests\sessions.py", line 637, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "D:\crewai\Miniconda3\envs\crewai\Lib\site-packages\requests\adapters.py", line 688, in send
    raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='telemetry.crewai.com', port=4319): Max retries exceeded with url: /v1/traces (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001F48498C6D0>, 'Connection to telemetry.crewai.com timed out. (connect timeout=30)'))

TusharP05 commented 1 month ago

Same here, I'm facing the same issue. Any solution?

Kingbadger3d commented 1 month ago

@TusharP05 I've added

import os

# Disable CrewAI telemetry
os.environ['CREWAI_DISABLE_TELEMETRY'] = 'true'

at the top of the code (and also `set CREWAI_DISABLE_TELEMETRY=true` in the shell). It's hacky and doesn't fix anything, but it does let the agent still generate results, though it still trips on the error every now and then. The CrewAI devs need to do better.
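A note on this workaround: environment variables read at import time only take effect if they are set before the library is imported. A minimal sketch, assuming (as in the comment above) that CrewAI honours `CREWAI_DISABLE_TELEMETRY`; `OTEL_SDK_DISABLED` is the standard OpenTelemetry SDK switch, added here as an extra, belt-and-braces measure:

```python
import os

# These must be set BEFORE `import crewai`, since telemetry is
# typically wired up when the package is first imported.
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"  # CrewAI-specific flag (per the comment above)
os.environ["OTEL_SDK_DISABLED"] = "true"         # standard OpenTelemetry SDK kill switch

# import crewai  # only import the library after the flags are in place
```

This suppresses the exporter rather than fixing the unreachable endpoint, so it is a mitigation, not a fix.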

Other breaking changes have also landed today that don't even let the app start, so don't `git pull` the latest right now.

TusharP05 commented 1 month ago

@Kingbadger3d but the results are half-cooked or stay incomplete.

Kingbadger3d commented 1 month ago

yep

TusharP05 commented 1 month ago

@Kingbadger3d are your outputs complete?

Kingbadger3d commented 1 month ago

Some complete, then sometimes it just errors out. I'm not normally this demanding, but I was about to have a video call with my boss about a prototype we're working on, and hey presto, I look like an idiot when I run the thing and it errors out :). Brilliant! Lol

TusharP05 commented 1 month ago

This is annoying tbh! I have informed the co-founder about it on X!

TusharP05 commented 1 month ago

@Kingbadger3d btw which LLM are you using?

joaomdmoura commented 1 month ago

Sorry folks, it was an error on our end. We fixed it, and we're also shipping a fix in the library so it doesn't happen again.

Kingbadger3d commented 1 month ago

Ollama, Mistral_Nemo, ArceeAgent, Qwen2_5, some others. I rewrote the whole thing yesterday and had it working brilliantly: I could paste 100 product names to research and it would bang them out one after the other. It's stuff like this that puts devs off a framework. If the devs aren't even bothering to check that code works before updating GitHub, I'm not cool with that, as I'll always be stressed out not knowing whether the thing is going to crap out on us at the wrong time. I'm already porting the code to other agent tools as a backup. It surprised me, as all over their website they talk about enterprise customers etc.; this is not how enterprise code should work.

Kingbadger3d commented 1 month ago

@joaomdmoura, thank you. Appreciated.

TusharP05 commented 1 month ago

> Ollama, Mistral_Nemo, ArceeAgent, Qwen2_5, some others. […]

Just wanted to ask: can I use multiple LLMs in the same CrewAI app for different agents?

zinyando commented 1 month ago

> Just wanted to ask: can I use multiple LLMs in the same CrewAI app for different agents?

@TusharP05 yes you can. Agents have an optional 'llm' attribute that you can pass in. This means if you have 5 agents you can assign each one a different LLM. See the agent docs for more 🙂
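A minimal sketch of what @zinyando describes, assuming the `crewai` `Agent` constructor and the LangChain community `ChatOllama` wrapper for local Ollama models (the roles, goals, and model names here are illustrative placeholders, not from this thread):

```python
from crewai import Agent
from langchain_community.chat_models import ChatOllama  # local models served by Ollama

# Each agent gets its own LLM via the optional `llm` attribute.
researcher = Agent(
    role="Researcher",
    goal="Research a list of product names",
    backstory="A thorough product researcher.",
    llm=ChatOllama(model="mistral-nemo"),
)
writer = Agent(
    role="Writer",
    goal="Summarise the research into a report",
    backstory="A concise technical writer.",
    llm=ChatOllama(model="qwen2.5"),
)
```

Agents that are given no `llm` fall back to the framework default (OpenAI at the time of this thread), which matches the checkbox behaviour described below.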

TusharP05 commented 1 month ago

@zinyando does that mean I can delegate my requests to different LLMs, and will that help me manage rate-limiting errors?

Kingbadger3d commented 1 month ago

@zinyando, yep budd.

I've set it up as a test. (If I use this tool long term, I'll write a function to search the code, find the number of agents, and assign a model drop-down control per agent so I don't have to keep hard-coding it.)

If I don't click the checkbox, it defaults to OpenAI. If I click the open-models checkbox, it takes the model assignment from Ollama and the list of local models I've generated as a drop-down selection list; this could be made per-agent if needed. Make sure to set the maximum number of models allowed in the Ollama API pass-through with langchain-community.

If you need more help budd, ping me.
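If the "max models" setting mentioned above refers to Ollama's server-side limits, those are controlled by environment variables on the machine running the Ollama server (variable names from Ollama's server configuration docs; the values here are illustrative):

```shell
# Keep several models resident so per-agent models are not
# constantly swapped in and out of memory.
export OLLAMA_MAX_LOADED_MODELS=4   # models kept loaded at once
export OLLAMA_NUM_PARALLEL=2        # parallel requests per model
# then restart the server, e.g.: ollama serve
```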

TusharP05 commented 1 month ago

@Kingbadger3d thanks, where can I ping you?

Kingbadger3d commented 1 month ago

I don't use GitHub often; I thought you could do it on here. If not, I'll make a burner email and post it. Read the docs, or as I would put it: RTFM hahaha.