KastanDay / ML4Bio-v2

LLMs to execute bioinformatics workflows, especially RNA-seq
MIT License

Implement an RNA-Sequence Analysis Workflow using DESeq2. Open a new pull request on a separate branch and comment the PR number here when you're done. #6

Open · KastanDay opened 2 weeks ago

KastanDay commented 2 weeks ago

Implement the RNA-Sequence Analysis Workflow per the following steps.

Analyze count data using DESeq2. Please write and execute the code to do the DESeq2 analysis on the data. If you generate results, please push them to GitHub and mention that in your pull request. Make sure you execute the code, and if it fails, keep retrying with improvements until you get something useful to share.
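DESeq2 itself is an R/Bioconductor package, but the core of its first step (estimateSizeFactors, the median-of-ratios normalization) is simple enough to sketch in plain Python. The sketch below is illustrative only, not a replacement for the requested workflow; the function name and data layout are hypothetical.

```python
import math
from statistics import median

def size_factors(counts):
    """Median-of-ratios size factors, as in DESeq2's estimateSizeFactors.

    counts: list of samples, each a list of raw counts per gene
    (all samples in the same gene order). Pure-Python sketch.
    """
    n_genes = len(counts[0])
    # Log geometric mean of each gene across samples; genes with any
    # zero count are excluded, as DESeq2 does for this step.
    log_means = []
    for g in range(n_genes):
        vals = [s[g] for s in counts]
        if any(v == 0 for v in vals):
            log_means.append(None)
        else:
            log_means.append(sum(math.log(v) for v in vals) / len(vals))
    # Each sample's size factor is the median ratio to those means.
    factors = []
    for s in counts:
        ratios = [math.log(s[g]) - log_means[g]
                  for g in range(n_genes) if log_means[g] is not None]
        factors.append(math.exp(median(ratios)))
    return factors
```

Dividing each sample's counts by its size factor makes samples comparable; a sample sequenced at twice the depth gets a factor twice as large.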

lil-jr-dev[bot] commented 2 weeks ago

Error in handle_issue_opened: 'openai_api_version' Traceback

Traceback (most recent call last):
  File "/app/ai_ta_backend/agents/github_webhook_handlers.py", line 168, in handle_issue_opened
    bot = WorkflowAgent(langsmith_run_id=langsmith_run_id)
  File "/app/ai_ta_backend/agents/langgraph_agent_v2.py", line 70, in __init__
    self.llm = get_llm()
  File "/app/ai_ta_backend/agents/langgraph_agent_v2.py", line 51, in get_llm
    return AzureChatOpenAI(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 107, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1102, in pydantic.main.validate_model
  File "/opt/venv/lib/python3.10/site-packages/langchain_openai/chat_models/azure.py", line 125, in validate_environment
    values["openai_api_version"] = values["openai_api_version"] or os.getenv(
KeyError: 'openai_api_version'
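The KeyError above suggests AzureChatOpenAI was constructed without an `openai_api_version` argument and without the `OPENAI_API_VERSION` environment variable set. A hedged sketch of a pre-flight check that fails with an actionable message instead of a bare KeyError (hypothetical helper, not part of langchain):

```python
import os

def resolve_api_version(explicit=None):
    """Prefer the explicit kwarg, fall back to the OPENAI_API_VERSION
    env var, and raise a clear error if neither is provided."""
    version = explicit or os.getenv("OPENAI_API_VERSION")
    if not version:
        raise ValueError(
            "Missing Azure OpenAI API version: pass openai_api_version=... "
            "or set the OPENAI_API_VERSION environment variable."
        )
    return version
```

Running such a check before building the agent would have turned this traceback into a one-line configuration error.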
lil-jr-dev[bot] commented 2 weeks ago

👉 Follow the bot's progress in real time on LangSmith: failed to generate a shareable URL, cannot find this run on LangSmith. RunID: 68c0330d-05c1-41a3-9fcf-19f70942d78e.

lil-jr-dev[bot] commented 2 weeks ago

Error in handle_issue_opened: 1 validation error for AzureChatOpenAI root As of openai>=1.0.0, Azure endpoints should be specified via the azure_endpoint param not openai_api_base (or alias base_url). (type=value_error) Traceback

Traceback (most recent call last):
  File "/app/ai_ta_backend/agents/github_webhook_handlers.py", line 168, in handle_issue_opened
    bot = WorkflowAgent(langsmith_run_id=langsmith_run_id)
  File "/app/ai_ta_backend/agents/langgraph_agent_v2.py", line 70, in __init__
    self.llm = get_llm()
  File "/app/ai_ta_backend/agents/langgraph_agent_v2.py", line 51, in get_llm
    return AzureChatOpenAI(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 107, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for AzureChatOpenAI
__root__
  As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). (type=value_error)
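The validation error states the fix directly: with openai>=1.0.0, the Azure base URL must be passed as `azure_endpoint`, not `openai_api_base` or `base_url`. A sketch of a small migration shim for existing kwargs dicts (hypothetical helper; the endpoint value shown in the test is made up):

```python
def migrate_azure_kwargs(kwargs):
    """Rename the pre-1.0 base-URL keys to `azure_endpoint`,
    which AzureChatOpenAI expects with openai>=1.0.0."""
    out = dict(kwargs)
    # Pop both legacy keys so neither survives to trigger the error.
    legacy = [out.pop(k, None) for k in ("openai_api_base", "base_url")]
    base = next((v for v in legacy if v), None)
    if base is not None:
        out.setdefault("azure_endpoint", base)
    return out
```

The shim is idempotent: if `azure_endpoint` is already set, the legacy keys are simply dropped.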
lil-jr-dev[bot] commented 2 weeks ago

Error in handle_issue_opened: Connection error. Traceback

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
    yield
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 214, in handle_async_request
    raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1437, in _request
    response = await self._client.send(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1617, in send
    response = await self._send_handling_auth(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1645, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1719, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
    with map_httpcore_exceptions():
  File "/root/.nix-profile/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
    yield
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 214, in handle_async_request
    raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1437, in _request
    response = await self._client.send(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1617, in send
    response = await self._send_handling_auth(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1645, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1719, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
    with map_httpcore_exceptions():
  File "/root/.nix-profile/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
    yield
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/opt/venv/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 214, in handle_async_request
    raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1437, in _request
    response = await self._client.send(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1617, in send
    response = await self._send_handling_auth(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1645, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_client.py", line 1719, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
    with map_httpcore_exceptions():
  File "/root/.nix-profile/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/ai_ta_backend/agents/github_webhook_handlers.py", line 169, in handle_issue_opened
    result = await bot.run(prompt)
  File "/app/ai_ta_backend/agents/langgraph_agent_v2.py", line 153, in run
    async for event in self.workflow.astream(inputs, config={"recursion_limit": 50}):
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4144, in astream
    async for item in self.bound.astream(
  File "/opt/venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 657, in astream
    async for chunk in self.atransform(
  File "/opt/venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 675, in atransform
    async for chunk in self._atransform_stream_with_config(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1600, in _atransform_stream_with_config
    chunk = cast(Output, await py_anext(iterator))
  File "/opt/venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 524, in _atransform
    _interrupt_or_proceed(done, inflight, step)
  File "/opt/venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 698, in _interrupt_or_proceed
    raise exc
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4076, in ainvoke
    return await self.bound.ainvoke(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2087, in ainvoke
    input = await step.ainvoke(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3527, in ainvoke
    return await self._acall_with_config(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1297, in _acall_with_config
    output = await coro
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3474, in _ainvoke
    output = await acall_func_with_variable_args(
  File "/app/ai_ta_backend/agents/langgraph_agent_v2.py", line 124, in plan_step
    plan = await planner.ainvoke({"objective": state["input"]})
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2087, in ainvoke
    input = await step.ainvoke(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4076, in ainvoke
    return await self.bound.ainvoke(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 186, in ainvoke
    llm_result = await self.agenerate_prompt(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 554, in agenerate_prompt
    return await self.agenerate(
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 514, in agenerate
    raise exceptions[0]
  File "/opt/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 617, in _agenerate_with_cache
    return await self._agenerate(
  File "/opt/venv/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 614, in _agenerate
    response = await self.async_client.create(messages=message_dicts, **params)
  File "/opt/venv/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1322, in create
    return await self._post(
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1705, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1408, in request
    return await self._request(
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1461, in _request
    return await self._retry_request(
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1530, in _retry_request
    return await self._request(
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1461, in _request
    return await self._retry_request(
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1530, in _retry_request
    return await self._request(
  File "/opt/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1471, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
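The root cause buried under this retry loop is the httpcore message: the request URL has no `http://` or `https://` scheme, i.e. the endpoint was likely configured as a bare hostname. A hedged sketch of a fail-fast check that would surface this before the agent starts (hypothetical helper):

```python
from urllib.parse import urlparse

def require_scheme(endpoint):
    """Reject endpoints without http(s)://, which is what httpx
    otherwise rejects deep inside the request stack."""
    if urlparse(endpoint).scheme not in ("http", "https"):
        raise ValueError(
            f"Azure endpoint must start with http:// or https://, got {endpoint!r}"
        )
    return endpoint
```

Because the openai client retries connection errors, the missing scheme produced three nested copies of the same traceback before surfacing as APIConnectionError.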
lil-jr-dev[bot] commented 2 weeks ago

👉 Follow the bot's progress in real time on LangSmith: failed to generate a shareable URL, cannot find this run on LangSmith. RunID: b350a051-57b5-431c-bd1a-da752e5dc8a7.