KastanDay / ML4Bio

LLMs to execute Bioinformatics workflows, esp. RNA-seq
MIT License

Create a full command line executable workflow for RNA-Seq on PBMC Samples. Open a new pull request (on a separate branch) and comment the PR number here when you're done. #7

[Open] KastanDay opened this issue 10 months ago

KastanDay commented 10 months ago

Experiment Type: RNA-Seq (sequencing of total cellular RNA)

Workflow Management: Bash/SLURM (scripting and job scheduling)

Software Stack: FastQC, MultiQC, STAR, RSEM, samtools, DESeq2

What else to know about the pipeline? I am working with PBMC samples collected from patients who are undergoing immunotherapy.

Use the data files existing in Report_WholeBrain as input for this workflow.

You should write a series of bash scripts and R scripts that can accomplish this task. Open a PR with those scripts when you're done.
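
As a sketch of what such a driver script might look like, assuming hypothetical file layouts: the paths, sample-name pattern, reference names, and thread counts below are placeholders, not taken from the repository, and on SLURM each `run` line would typically be wrapped in an `sbatch` job with dependencies.

```shell
#!/usr/bin/env bash
# Hypothetical RNA-Seq driver (QC -> alignment -> quantification -> DESeq2).
# All names below are illustrative placeholders.
set -euo pipefail

DATA_DIR="Report_WholeBrain"   # input FASTQs, per the issue
OUT_DIR="results"
DRY_RUN="${DRY_RUN:-1}"        # default: print commands instead of running them

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

mkdir -p "$OUT_DIR"

# 1. Read-level QC, then aggregate the reports
run fastqc --outdir "$OUT_DIR/fastqc" "$DATA_DIR"/*.fastq.gz
run multiqc --outdir "$OUT_DIR/multiqc" "$OUT_DIR/fastqc"

# 2. Per-sample alignment (STAR) and quantification (RSEM)
for fq in "$DATA_DIR"/*_R1.fastq.gz; do
  sample="$(basename "$fq" _R1.fastq.gz)"
  run STAR --runThreadN 8 --genomeDir genome_index \
      --readFilesIn "$fq" "${fq/_R1/_R2}" --readFilesCommand zcat \
      --quantMode TranscriptomeSAM --outSAMtype BAM SortedByCoordinate \
      --outFileNamePrefix "$OUT_DIR/star/${sample}_"
  run rsem-calculate-expression --paired-end --alignments -p 8 \
      "$OUT_DIR/star/${sample}_Aligned.toTranscriptome.out.bam" \
      rsem_ref "$OUT_DIR/rsem/$sample"
done

# 3. Differential expression in R (DESeq2) on the RSEM gene counts
run Rscript deseq2_pbmc.R "$OUT_DIR/rsem"
```

With `DRY_RUN=1` the script only echoes the commands, which makes the step order easy to review before submitting real jobs.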

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

1 agents ALL FAILED with runtime exceptions:

Traceback (most recent call last):
  File "/Users/kastanday/code/ncsa/ai-ta/ai-ta-backend/ai_ta_backend/agents/github_agent.py", line 148, in bot_runner_with_retries
    result = bot.run(f"{run_instruction}")
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/base.py", line 501, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/base.py", line 306, in __call__
    raise e
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/base.py", line 300, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/agents/agent.py", line 1141, in _call
    next_step_output = self._take_next_step(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/agents/agent.py", line 928, in _take_next_step
    output = self.agent.plan(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/agents/agent.py", line 541, in plan
    full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/llm.py", line 257, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/base.py", line 306, in __call__
    raise e
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/base.py", line 300, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/llm.py", line 93, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/llm.py", line 103, in generate
    return self.llm.generate_prompt(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/base.py", line 469, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/base.py", line 359, in generate
    raise e
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/base.py", line 349, in generate
    self._generate_with_cache(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/base.py", line 501, in _generate_with_cache
    return self._generate(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/openai.py", line 360, in _generate
    response = self.completion_with_retry(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/openai.py", line 299, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chat_models/openai.py", line 297, in _completion_with_retry
    return self.client.create(**kwargs)
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/Users/kastanday/miniforge3/envs/flask10_py10/lib/python3.10/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 17093 tokens. Please reduce the length of the messages.
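
Every failure in this thread is the same 8192-token context overflow. A minimal sketch of the kind of guard that prevents it, trimming the oldest turns before calling the model; the message shape, function names, and the 4-characters-per-token estimate here are assumptions for illustration, not the repo's actual API:

```python
MAX_CONTEXT_TOKENS = 8192

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_messages(messages: list[dict], limit: int = MAX_CONTEXT_TOKENS) -> list[dict]:
    """Drop the oldest non-system messages until the estimated total fits."""
    msgs = list(messages)
    while len(msgs) > 1 and sum(estimate_tokens(m["content"]) for m in msgs) > limit:
        # Keep the first (system) message; drop the oldest turn after it.
        msgs.pop(1)
    return msgs
```

A tokenizer-based count (e.g. tiktoken) would be more precise, but even a crude character heuristic applied before each `generate` call avoids the hard API error seen above.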
lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

1 agents ALL FAILED with runtime exceptions:

Traceback (most recent call last):
  File "/Users/kastanday/code/ncsa/ai-ta/ai-ta-backend/ai_ta_backend/agents/github_agent.py", line 148, in bot_runner_with_retries
    result = bot.run(f"{run_instruction}")
  [... remaining stack identical to the traceback in the previous failure comment ...]
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 20327 tokens. Please reduce the length of the messages.
lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

[Errno 404 Client Error: Not Found for url: https://api.smith.langchain.com/runs/13de474a-0282-4071-8418-441235ea0a4e/share] {"detail":"Resource not found"}

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

The task requires creating a full command line executable workflow for RNA-Seq on PBMC Samples. The workflow should include

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

I created a new branch for my work: main. You can monitor the LangSmith trace here.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

You can monitor the LangSmith trace here.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

You can monitor the LangSmith trace here.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

name 'run_id' is not defined

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

You can monitor the LangSmith trace here.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

You can monitor the LangSmith trace here.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

You can monitor the LangSmith trace here.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

1 agents ALL FAILED with runtime exceptions:

Traceback (most recent call last):
  File "/Users/kastanday/code/ncsa/ai-ta/ai-ta-backend/ai_ta_backend/agents/github_agent.py", line 137, in bot_runner_with_retries
    result = bot.run(f"{run_instruction}")
  [... remaining stack identical to the traceback in the first failure comment ...]
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 9436 tokens. Please reduce the length of the messages.
lil-jr-dev[bot] commented 10 months ago

Thanks for opening a new issue! I'll now try to finish this implementation and open a PR for you to review.

You can monitor the LangSmith trace here.

I created a new branch for my work: main.

Feel free to comment in this thread to give me additional instructions, or I'll tag you in a comment if I get stuck. If I think I'm successful I'll 'request your review' on the resulting PR. Just watch for emails while I work.

lil-jr-dev[bot] commented 10 months ago

1 agents ALL FAILED with runtime exceptions:

Traceback (most recent call last):
  File "/Users/kastanday/code/ncsa/ai-ta/ai-ta-backend/ai_ta_backend/agents/github_agent.py", line 139, in bot_runner_with_retries
    result = bot.with_config({"run_name": "ReAct ML4Bio Agent"}).invoke({"input": run_instruction}, {"metadata": {"run_id_in_metadata": str(run_id_in_metadata)}})
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/schema/runnable/base.py", line 2316, in invoke
    return self.bound.invoke(
  File "/Users/kastanday/code/ncsa/ai-ta/langchain-improved-agents/libs/langchain/langchain/chains/base.py", line 84, in invoke
    return self(
  [... remaining stack identical to the traceback in the first failure comment ...]
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 10032 tokens. Please reduce the length of the messages.