**Failure summary:**
The action failed for the following reasons:
- The test `test_list_all` in `tests/test_cli/test_actions.py` failed because the `actions` command returned an exit code of 1 instead of 0.
- The test `test_list` in `tests/test_cli/test_integrations.py` failed because the user was not logged in, producing the error: "Error: User not logged in, please login using `composio login`".
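The unauthenticated failure mode can be sketched as follows. Note that `run_integrations`, `Result`, and the `COMPOSIO_API_KEY` check are illustrative stand-ins modeling the observed behavior, not composio's actual implementation:

```python
class Result:
    """Minimal stand-in for a CLI invocation result (illustrative only)."""

    def __init__(self, exit_code: int, stderr: str = "") -> None:
        self.exit_code = exit_code
        self.stderr = stderr


def run_integrations(env: dict) -> Result:
    """Hypothetical model of the behavior the failing test exercises:
    without a stored session, the command exits with code 1 and prints
    the login error, which is exactly what the CI assertion caught."""
    if not env.get("COMPOSIO_API_KEY"):
        return Result(
            1, "Error: User not logged in, please login using `composio login`"
        )
    return Result(0)
```

Under this model, the fix is to provide credentials to the CI environment (or mock the session) so the command returns exit code 0 before the `assert result.exit_code == 0` check runs.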
**Relevant error logs:**
```yaml
1: ##[group]Operating System
2: Ubuntu
...
491: * [new branch] feat/opensource-ready -> origin/feat/opensource-ready
492: * [new branch] fix/readme -> origin/fix/readme
493: * [new branch] fix/readme-logo -> origin/fix/readme-logo
494: * [new branch] fix/tests -> origin/fix/tests
495: * [new branch] ft-add-better-help-text -> origin/ft-add-better-help-text
496: * [new branch] ft-apps-id -> origin/ft-apps-id
497: * [new branch] ft-bring-back-core-sdk -> origin/ft-bring-back-core-sdk
498: * [new branch] ft-did-you-mean -> origin/ft-did-you-mean
499: * [new branch] ft-error-tracking -> origin/ft-error-tracking
...
873: composio/local_tools/local_workspace/tests/test_workspace.py::TestCreateWorkspaceAction::test_create_workspace SKIPPED [ 5%]
874: composio/local_tools/local_workspace/tests/test_workspace.py::TestCmds::test_create_dir_cmd SKIPPED [ 8%]
875: tests/test_example.py::test_example[example0] SKIPPED (Testing in CI
876: will lead to too much LLM API usage) [ 10%]
877: tests/test_example.py::test_example[example1] SKIPPED (Testing in CI
878: will lead to too much LLM API usage) [ 13%]
879: tests/test_example.py::test_example[example2] SKIPPED (Testing in CI
880: will lead to too much LLM API usage) [ 16%]
881: tests/test_cli/test_actions.py::TestListActions::test_list_all[arguments0-exptected_outputs0-unexptected_outputs0] FAILED [ 18%]
882: tests/test_cli/test_actions.py::TestListActions::test_list_all[arguments1-exptected_outputs1-unexptected_outputs1] FAILED [ 21%]
883: tests/test_cli/test_actions.py::TestListActions::test_list_all[arguments2-exptected_outputs2-unexptected_outputs2] PASSED [ 24%]
884: tests/test_cli/test_actions.py::TestListActions::test_limit SKIPPED [ 27%]
885: tests/test_cli/test_actions.py::TestListActions::test_copy PASSED [ 29%]
886: tests/test_cli/test_apps.py::TestList::test_list PASSED [ 32%]
887: tests/test_cli/test_apps.py::TestUpdate::test_update PASSED [ 35%]
888: tests/test_cli/test_connections.py::TestListConnections::test_list_all PASSED [ 37%]
889: tests/test_cli/test_connections.py::TestListConnections::test_list_one PASSED [ 40%]
890: tests/test_cli/test_context.py::test_login_required_decorator PASSED [ 43%]
891: tests/test_cli/test_integrations.py::TestIntegration::test_list FAILED [ 45%]
...
904: tests/test_tools/test_schema.py::test_openai_schema PASSED [ 81%]
905: tests/test_tools/test_schema.py::test_claude_schema PASSED [ 83%]
906: tests/test_utils/test_decorators.py::test_deprecated PASSED [ 86%]
907: tests/test_utils/test_git.py::test_get_git_user_info PASSED [ 89%]
908: tests/test_utils/test_shared.py::test_get_pydantic_signature_format_from_schema_params PASSED [ 91%]
909: tests/test_utils/test_shared.py::test_json_schema_to_pydantic_field PASSED [ 94%]
910: tests/test_utils/test_shared.py::test_json_schema_to_fields_dict PASSED [ 97%]
911: tests/test_utils/test_url.py::test_get_web_url PASSED [100%]
912: =================================== FAILURES ===================================
...
934: exptected_outputs: t.Tuple[str, ...],
935: unexptected_outputs: t.Tuple[str, ...],
936: ) -> None:
937: """Test list all actions."""
938: result = self.run("actions", *arguments)
939: > assert result.exit_code == 0
940: E assert 1 == 0
941: E + where 1 = .exit_code
942: tests/test_cli/test_actions.py:39: AssertionError
...
965: exptected_outputs: t.Tuple[str, ...],
966: unexptected_outputs: t.Tuple[str, ...],
967: ) -> None:
968: """Test list all actions."""
969: result = self.run("actions", *arguments)
970: > assert result.exit_code == 0
971: E assert 1 == 0
972: E + where 1 = .exit_code
973: tests/test_cli/test_actions.py:39: AssertionError
974: __________________________ TestIntegration.test_list ___________________________
975: self =
976: def test_list(self) -> None:
977: """Test list integrations."""
978: result = self.run("integrations")
979: > assert result.exit_code == 0, result.stderr
980: E AssertionError: Error: User not logged in, please login using `composio login`
981: E
982: E assert 1 == 0
983: E + where 1 = .exit_code
984: tests/test_cli/test_integrations.py:15: AssertionError
985: =============================== warnings summary ===============================
986: composio/client/http.py:12
987: /home/runner/work/composio/composio/python/composio/client/http.py:12: DeprecationWarning: Inheritance class AsyncHttpClient from ClientSession is discouraged
988: class AsyncHttpClient(AsyncSession):
989: .tox/unittests/lib/python3.10/site-packages/pydantic/_internal/_config.py:284
990: /home/runner/work/composio/composio/python/.tox/unittests/lib/python3.10/site-packages/pydantic/_internal/_config.py:284: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.7/migration/
...
1103: composio/utils/shared.py 117 83 29% 44, 47-51, 54-58, 61-77, 83, 101-104, 153-158, 174-221, 247-292
1104: composio/utils/url.py 10 1 90% 35
1105: examples/crewai_ci_chart.py 15 15 0% 1-38
1106: --------------------------------------------------------------------------------------------------------------------
1107: TOTAL 7947 1592 80%
1108: Coverage HTML written to dir htmlcov
1109: Coverage XML written to file coverage.xml
1110: =========================== short test summary info ============================
1111: FAILED tests/test_cli/test_actions.py::TestListActions::test_list_all[arguments0-exptected_outputs0-unexptected_outputs0] - assert 1 == 0
1112: + where 1 = .exit_code
1113: FAILED tests/test_cli/test_actions.py::TestListActions::test_list_all[arguments1-exptected_outputs1-unexptected_outputs1] - assert 1 == 0
1114: + where 1 = .exit_code
1115: FAILED tests/test_cli/test_integrations.py::TestIntegration::test_list - AssertionError: Error: User not logged in, please login using `composio login`
1116: assert 1 == 0
1117: + where 1 = .exit_code
1118: ============= 3 failed, 28 passed, 6 skipped, 2 warnings in 11.60s =============
1119: unittests: exit 1 (12.79 seconds) /home/runner/work/composio/composio/python> pytest -vvv -rfE --doctest-modules composio/ tests/ --cov=composio --cov=examples --cov-report=html --cov-report=xml --cov-report=term --cov-report=term-missing --cov-config=.coveragerc pid=5611
1120: .pkg: _exit> python /opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
1121: unittests: FAIL code 1 (32.63=setup[19.84]+cmd[12.79] seconds)
1122: evaluation failed :( (33.01 seconds)
1123: ##[error]Process completed with exit code 1.
```
✨ CI feedback usage guide:
The CI feedback tool (`/checks`) automatically triggers when a PR has a failed check.
The tool analyzes the failed checks and provides several kinds of feedback:
- Failed stage
- Failed test name
- Failure summary
- Relevant error logs
In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:
```
/checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
```
where `{repo_name}` is the name of the repository, `{run_number}` is the run number of the failed check, and `{job_number}` is the job number of the failed check.
#### Configuration options
- `enable_auto_checks_feedback` - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
- `excluded_checks_list` - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
- `enable_help_text` - if set to true, the tool will provide a help message with the feedback. Default is true.
- `persistent_comment` - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
- `final_update_message` - if `persistent_comment` is true and a previous checks comment is updated, the tool will also post a new message: "Persistent checks updated to latest commit". Default is true.
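The options above would typically live in a pr-agent configuration file. The snippet below is a hypothetical sketch: the `[checks]` section name and TOML format are assumptions, so consult the linked docs for the exact layout:

```toml
# Hypothetical pr-agent configuration sketch showing the documented defaults.
[checks]
enable_auto_checks_feedback = true
excluded_checks_list = []      # e.g. ["check1", "check2"] to suppress specific checks
enable_help_text = true
persistent_comment = true
final_update_message = true
```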
See more information about the `checks` tool in the [docs](https://pr-agent-docs.codium.ai/tools/ci_feedback/).
**CI Failure Feedback** 🧐
(Checks updated until commit https://github.com/ComposioHQ/composio/commit/6f22f1a195013e41ac73390906d8cfeed574c0be)