crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com

UnicodeEncodeError when running crew.kickoff #755

Closed: rponesoul closed this issue 2 months ago

rponesoul commented 2 months ago

I'm running JupyterLab on macOS 12.7.4 and trying to run the code from the deeplearning.ai course *Multi AI Agent Systems with crewAI*.

Editorial note: yesterday I attended a thought-leaders' summit on AI & Business Agility that included many influential people, among them a signer of the Agile Manifesto. The organizer, Peter Merel, mentioned multi-agent AI early in his keynote, and when the topic came up again in a breakout I would have mentioned crewAI if I had it working. There is a lot of opportunity for me to showcase crewAI, and I need help getting it to work.

I used the following command to install crewai, per the course instructions:

```
!pip install crewai==0.28.8 crewai_tools==0.1.6 langchain_community==0.0.29
```

Below is the code I ran to reproduce the error when running `crew.kickoff`:

```python
import warnings
warnings.filterwarnings('ignore')

from crewai import Agent, Task, Crew

import os
from utils import get_openai_api_key

openai_api_key = get_openai_api_key()
os.environ["OPENAI_MODEL_NAME"] = 'gpt-3.5-turbo'

planner = Agent(
    role="Content Planner",
    goal="Plan engaging and factually accurate content on {topic}",
    backstory="You're working on planning a blog article "
              "about the topic: {topic}. "
              "You collect information that helps the "
              "audience learn something "
              "and make informed decisions. "
              "Your work is the basis for "
              "the Content Writer to write an article on this topic.",
    allow_delegation=False,
    verbose=True
)

writer = Agent(
    role="Content Writer",
    goal="Write an insightful and factually accurate "
         "opinion piece about the topic: {topic}",
    backstory="You're working on writing "
              "a new opinion piece about the topic: {topic}. "
              "You base your writing on the work of "
              "the Content Planner, who provides an outline "
              "and relevant context about the topic. "
              "You follow the main objectives and "
              "direction of the outline, "
              "as provided by the Content Planner. "
              "You also provide objective and impartial insights "
              "and back them up with information "
              "provided by the Content Planner. "
              "You acknowledge in your opinion piece "
              "when your statements are opinions "
              "as opposed to objective statements.",
    allow_delegation=False,
    verbose=True
)

editor = Agent(
    role="Editor",
    goal="Edit a given blog post to align with "
         "the writing style of the organization.",
    backstory="You are an editor who receives a blog post "
              "from the Content Writer. "
              "Your goal is to review the blog post "
              "to ensure that it follows journalistic best practices, "
              "provides balanced viewpoints "
              "when providing opinions or assertions, "
              "and also avoids major controversial topics "
              "or opinions when possible.",
    allow_delegation=False,
    verbose=True
)

plan = Task(
    description=(
        "1. Prioritize the latest trends, key players, "
        "and noteworthy news on {topic}.\n"
        "2. Identify the target audience, considering "
        "their interests and pain points.\n"
        "3. Develop a detailed content outline including "
        "an introduction, key points, and a call to action.\n"
        "4. Include SEO keywords and relevant data or sources."
    ),
    expected_output="A comprehensive content plan document "
                    "with an outline, audience analysis, "
                    "SEO keywords, and resources.",
    agent=planner,
)

write = Task(
    description=(
        "1. Use the content plan to craft a compelling "
        "blog post on {topic}.\n"
        "2. Incorporate SEO keywords naturally.\n"
        "3. Sections/Subtitles are properly named "
        "in an engaging manner.\n"
        "4. Ensure the post is structured with an "
        "engaging introduction, insightful body, "
        "and a summarizing conclusion.\n"
        "5. Proofread for grammatical errors and "
        "alignment with the brand's voice.\n"
    ),
    expected_output="A well-written blog post "
                    "in markdown format, ready for publication; "
                    "each section should have 2 or 3 paragraphs.",
    agent=writer,
)

edit = Task(
    description=(
        "Proofread the given blog post for "
        "grammatical errors and "
        "alignment with the brand's voice."
    ),
    expected_output="A well-written blog post in markdown format, "
                    "ready for publication; "
                    "each section should have 2 or 3 paragraphs.",
    agent=editor
)

crew = Crew(
    agents=[planner, writer, editor],
    tasks=[plan, write, edit],
    verbose=2
)

result = crew.kickoff(inputs={"topic": "Artificial Intelligence"})
```
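Editorial note: for readers following along, `crew.kickoff(inputs=...)` interpolates the `inputs` dict into the `{topic}` placeholders in the agent and task definitions above before any LLM call is made. A simplified sketch of that substitution (an illustration, not crewAI's actual implementation):

```python
# Hypothetical illustration of how kickoff's inputs reach the prompts:
# crewAI performs str.format-style interpolation on the {topic}
# placeholders in goals, backstories, and task descriptions.
template = (
    "1. Prioritize the latest trends, key players, "
    "and noteworthy news on {topic}.\n"
)
print(template.format(topic="Artificial Intelligence"))
# -> 1. Prioritize the latest trends, key players, and noteworthy news on Artificial Intelligence.
```

This matches the `[INFO]` output below, where the task description appears with `{topic}` already replaced.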

Below are the output and traceback from running `crew.kickoff`. Help with this error would be greatly appreciated.

```
[DEBUG]: == Working Agent: Content Planner
[INFO]: == Starting Task: 1. Prioritize the latest trends, key players, and noteworthy news on Artificial Intelligence.
2. Identify the target audience, considering their interests and pain points.
3. Develop a detailed content outline including an introduction, key points, and a call to action.
4. Include SEO keywords and relevant data or sources.

> Entering new CrewAgentExecutor chain...
```


```
UnicodeEncodeError                        Traceback (most recent call last)
Cell In[11], line 1
----> 1 result = crew.kickoff(inputs={"topic": "Artificial Intelligence"})

File /opt/anaconda3/lib/python3.11/site-packages/crewai/crew.py:252, in Crew.kickoff(self, inputs)
--> 252 result = self._run_sequential_process()

File /opt/anaconda3/lib/python3.11/site-packages/crewai/crew.py:293, in Crew._run_sequential_process(self)
--> 293 output = task.execute(context=task_output)

File /opt/anaconda3/lib/python3.11/site-packages/crewai/task.py:173, in Task.execute(self, agent, context, tools)
--> 173 result = self._execute(task=self, agent=agent, context=context, tools=tools)

File /opt/anaconda3/lib/python3.11/site-packages/crewai/task.py:182, in Task._execute(self, agent, task, context, tools)
--> 182 result = agent.execute_task(task=task, context=context, tools=tools)

File /opt/anaconda3/lib/python3.11/site-packages/crewai/agent.py:221, in Agent.execute_task(self, task, context, tools)
--> 221 result = self.agent_executor.invoke({"input": task_prompt, "tool_names": self.agent_executor.tools_names, "tools": self.agent_executor.tools_description})["output"]

File /opt/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
--> 163 raise e

File /opt/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
--> 153 self._call(inputs, run_manager=run_manager)

File /opt/anaconda3/lib/python3.11/site-packages/crewai/agents/executor.py:124, in CrewAgentExecutor._call(self, inputs, run_manager)
--> 124 next_step_output = self._take_next_step(name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager=run_manager)

File /opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1138, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
-> 1138 [a for a in self._iter_next_step(name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)]

File /opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1138, in <listcomp>(.0)
-> 1138 [a for a in self._iter_next_step(name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)]

File /opt/anaconda3/lib/python3.11/site-packages/crewai/agents/executor.py:186, in CrewAgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
--> 186 output = self.agent.plan(intermediate_steps, callbacks=run_manager.get_child() if run_manager else None, **inputs)

File /opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:397, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
--> 397 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2875, in RunnableSequence.stream(self, input, config, **kwargs)
-> 2875 yield from self.transform(iter([input]), config, **kwargs)

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2862, in RunnableSequence.transform(self, input, config, **kwargs)
-> 2862 yield from self._transform_stream_with_config(input, self._transform, patch_config(config, run_name=(config or {}).get("run_name") or self.name), **kwargs)

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1881, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
-> 1881 chunk: Output = context.run(next, iterator)  # type: ignore

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2826, in RunnableSequence._transform(self, input, run_manager, config)
-> 2826 for output in final_pipeline:

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1282, in Runnable.transform(self, input, config, **kwargs)
-> 1282 for ichunk in input:

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:4736, in RunnableBindingBase.transform(self, input, config, **kwargs)
-> 4736 yield from self.bound.transform(input, self._merge_configs(config), **{**self.kwargs, **kwargs})

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1300, in Runnable.transform(self, input, config, **kwargs)
-> 1300 yield from self.stream(final, config, **kwargs)

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:249, in BaseChatModel.stream(self, input, config, stop, **kwargs)
--> 249 raise e

File /opt/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:229, in BaseChatModel.stream(self, input, config, stop, **kwargs)
--> 229 for chunk in self._stream(messages, stop=stop, **kwargs):

File /opt/anaconda3/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:408, in ChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
--> 408 for chunk in self.client.create(messages=message_dicts, **params):

File /opt/anaconda3/lib/python3.11/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
--> 277 return func(*args, **kwargs)

File /opt/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py:606, in Completions.create(self, messages, model, ...)
--> 606 return self._post("/chat/completions", body=maybe_transform({...}, completion_create_params.CompletionCreateParams), options=make_request_options(...), cast_to=ChatCompletion, stream=stream or False, stream_cls=Stream[ChatCompletionChunk])

File /opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
-> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File /opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
--> 921 return self._request(cast_to=cast_to, options=options, stream=stream, stream_cls=stream_cls, remaining_retries=remaining_retries)

File /opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:942, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
--> 942 request = self._build_request(options)

File /opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:459, in BaseClient._build_request(self, options)
--> 459 headers = self._build_headers(options)

File /opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:417, in BaseClient._build_headers(self, options)
--> 417 headers = httpx.Headers(headers_dict)

File /opt/anaconda3/lib/python3.11/site-packages/httpx/_models.py:72, in Headers.__init__(self, headers, encoding)
---> 72 self._list = [(normalize_header_key(k, lower=False, encoding=encoding), normalize_header_key(k, lower=True, encoding=encoding), normalize_header_value(v, encoding)) for k, v in headers.items()]

File /opt/anaconda3/lib/python3.11/site-packages/httpx/_models.py:76, in <listcomp>(.0)
---> 76 normalize_header_value(v, encoding),

File /opt/anaconda3/lib/python3.11/site-packages/httpx/_utils.py:53, in normalize_header_value(value, encoding)
---> 53 return value.encode(encoding or "ascii")

UnicodeEncodeError: 'ascii' codec can't encode character '\u2018' in position 7: ordinal not in range(128)
```
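Editorial note: the final frame shows `httpx` encoding an HTTP header value as ASCII and failing on `U+2018` (a left single quotation mark) at position 7. The OpenAI client sends the key in an `Authorization: Bearer <key>` header, and `Bearer ` is exactly seven characters, so the first character of the key itself is most likely a curly quote, typically picked up by copy-pasting the key from a rich-text document. A minimal check along these lines (assuming the course's `get_openai_api_key` helper ends up setting the `OPENAI_API_KEY` environment variable) would confirm it:

```python
import os

# Hypothetical diagnostic: verify the API key is plain ASCII before any
# request is built. A curly quote (U+2018) pasted from a rich-text source
# would reproduce the UnicodeEncodeError above.
key = os.environ.get("OPENAI_API_KEY", "")
bad = [(i, ch, hex(ord(ch))) for i, ch in enumerate(key) if ord(ch) > 127]
if bad:
    print("Non-ASCII characters in OPENAI_API_KEY:", bad)
    # Strip typographic quotes and stray whitespace, then re-export.
    os.environ["OPENAI_API_KEY"] = key.strip().strip("\u2018\u2019\u201c\u201d\"'")
else:
    print("OPENAI_API_KEY is ASCII-clean.")
```

If the check flags a smart quote, re-copying the key from the OpenAI dashboard into a plain-text editor before setting it should resolve the error regardless of platform.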

rponesoul commented 2 months ago

Moved to Databricks and will work through issues there.