ComposioHQ / composio

Composio equips your AI agents & LLMs with 100+ high-quality integrations via function calling
https://docs.composio.dev

feat: add example tests for js #191

Closed: utkarsh-dixit closed this pull request 2 months ago

utkarsh-dixit commented 3 months ago

PR Type

Tests, Enhancement


Description


Changes walkthrough 📝

Relevant files

Tests

run-tests.spec.ts (js/tests/run-tests.spec.ts, +43/-0): Add Playwright tests for example projects
  • Added a test script to run example projects using Playwright.
  • Implemented steps to build and start example projects.
  • Included assertions to validate expected output.

.last-run.json (js/test-results/.last-run.json, +4/-0): Add file to store last test run results
  • Added file to store the results of the last test run.

Enhancement

playwright.config.ts (js/playwright.config.ts, +23/-0): Configure Playwright settings for testing
  • Configured Playwright with a timeout and reporter.
  • Set base URL and launch options for tests.
  • Added HTTP headers, including an authorization token.

package.json (js/package.json, +2/-1): Update package.json to include Playwright tests
  • Updated the test script to run Playwright tests.
  • Added Playwright as a dependency.

package.json (js/examples/e2e/package.json, +1/-0): Add start script for e2e example
  • Added a start script to run demo.mjs.

package.json (js/examples/openai/package.json, +1/-0): Add start script for OpenAI example
  • Added a start script to run demo.mjs.

package.json (js/examples/langchain/package.json, +1/-0): Add start script for LangChain example
  • Added a start script to run demo.mjs.

💡 PR-Agent usage: Comment `/help` on the PR to get a list of all available PR-Agent tools and their descriptions.

    codiumai-pr-agent-pro[bot] commented 3 months ago

PR Reviewer Guide 🔍

• ⏱️ Estimated effort to review [1-5]: 3
• 🧪 Relevant tests: Yes
• 🔒 Security concerns: Sensitive information exposure. The configuration file explicitly includes an authorization token using `process.env.API_TOKEN`. Ensure that this token is securely managed and not exposed in logs or error messages. Consider using secrets-management tools for better security practices.
• ⚡ Key issues to review:
  • Possible bug: the use of synchronous and asynchronous exec calls within the same test step could lead to race conditions or unhandled promise rejections. Consider using async/await consistently for better error handling and control flow.
  • Error handling: the error handling in the test script could be improved by adding more specific error messages and handling specific types of exceptions more gracefully.
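The security note above concerns js/playwright.config.ts. A minimal sketch of a config in that shape, with the token read from the environment rather than hard-coded (the timeout, reporter, and header name here are illustrative assumptions, not the PR's actual values):

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 60_000,   // illustrative; the PR's actual value is not shown here
  reporter: 'list',  // illustrative
  use: {
    baseURL: process.env.COMPOSIO_BASE_URL,
    extraHTTPHeaders: {
      // Read from the environment so the token never lands in source
      // control; keep it out of logs and error messages as well.
      'x-api-key': `${process.env.API_TOKEN}`,
    },
  },
});
```

A secrets manager or CI-level masked variables can supply `API_TOKEN` at run time.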
codiumai-pr-agent-pro[bot] commented 3 months ago

PR Code Suggestions ✨

Best practice

**Add `.last-run.json` to `.gitignore` to avoid committing frequently changing test result files**

Consider removing the `.last-run.json` file from version control by adding it to `.gitignore`, as it is likely to change frequently and may not be relevant to all developers.

[js/test-results/.last-run.json [1-4]](https://github.com/ComposioHQ/composio/pull/191/files#diff-6644a839facffb23e2c5b38de3148d0c9ea52f27f0419fe54bb9a04ae072cf19R1-R4)

```diff
-{
-  "status": "passed",
-  "failedTests": []
-}
+# Add `.last-run.json` to `.gitignore` file
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 8. Why: Adding frequently changing files like test results to `.gitignore` is a best practice to keep the repository clean and relevant, which makes this suggestion very useful.
**Use Playwright APIs instead of execSync and exec for better control and error handling**

Instead of using execSync and exec within the test, consider using Playwright's page.goto and other Playwright APIs to interact with the examples. This will provide better control over the test flow and error handling.

[js/tests/run-tests.spec.ts [17-35]](https://github.com/ComposioHQ/composio/pull/191/files#diff-b41ef3bc21900850abd262a5c6c563e7c38345a3b7fe580914a503895b430507R17-R35)

```diff
-execSync(`pnpm build && cd ${exampleDir} && pnpm link ../../`);
-exec(`pnpm build && cd ${exampleDir} && pnpm start`, (error, stdout, stderr) => {
-  if (error) {
-    console.error(`exec error: ${error}`);
-    reject(error);
-    return;
-  }
-  console.log(`stdout: ${stdout}`);
-  console.error(`stderr: ${stderr}`);
-
-  // Assert some stuff on stdout for test checks
-  try {
-    expect(stdout).toContain('Expected output');
-    expect(stderr).toBe('');
-    resolve();
-  } catch (assertionError) {
-    reject(assertionError);
-  }
-});
+await page.goto(`file://${exampleDir}/index.html`);
+// Add further interactions and assertions using Playwright APIs
+const content = await page.content();
+expect(content).toContain('Expected output');
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 7. Why: The suggestion to use Playwright APIs for better control and error handling is valid and improves test reliability and readability. However, it's not a critical bug fix, hence the score.
**Use Playwright's built-in error handling and reporting mechanisms instead of manual try-catch blocks**

Instead of catching and logging errors within the test, consider using Playwright's built-in error handling and reporting mechanisms to provide more structured and informative test results.

[js/tests/run-tests.spec.ts [9-42]](https://github.com/ComposioHQ/composio/pull/191/files#diff-b41ef3bc21900850abd262a5c6c563e7c38345a3b7fe580914a503895b430507R9-R42)

```diff
-try {
-  const files = fs.readdirSync(examplesDir);
-  // ...
-} catch (err) {
-  console.error(`Unable to read examples directory: ${err}`);
-}
+const files = await fs.promises.readdir(examplesDir);
+// ...
```

Suggestion importance [1-10]: 5. Why: While using built-in error handling can make the code cleaner, the suggestion incorrectly replaces synchronous file reading with an asynchronous one without adjusting the surrounding code context, which could lead to issues.
Enhancement

**Add a pretest script to ensure dependencies are installed before running tests**

Consider adding a pretest script to ensure that the necessary dependencies are installed before running the tests. This can help avoid issues where tests fail due to missing dependencies.

[js/package.json [6-10]](https://github.com/ComposioHQ/composio/pull/191/files#diff-e51a40ac250c9696142466f114f754161e7e5102c0cdb5354548b757deb272f6R6-R10)

```diff
 "scripts": {
+  "pretest": "npm install",
   "test": "playwright test tests/*",
   "build": "tsc --project . --outDir lib",
   "type-docs": "typedoc"
 },
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 7. Why: Adding a `pretest` script can help ensure that all dependencies are installed before tests are run, which is a good practice to prevent failures due to missing dependencies. This suggestion is relevant and enhances maintainability.
**Add a postinstall script to build the project after dependencies are installed**

It might be beneficial to add a postinstall script to run npm run build after dependencies are installed. This ensures that the project is built and ready to use immediately after installation.

[js/package.json [6-10]](https://github.com/ComposioHQ/composio/pull/191/files#diff-e51a40ac250c9696142466f114f754161e7e5102c0cdb5354548b757deb272f6R6-R10)

```diff
 "scripts": {
   "test": "playwright test tests/*",
   "build": "tsc --project . --outDir lib",
-  "type-docs": "typedoc"
+  "type-docs": "typedoc",
+  "postinstall": "npm run build"
 },
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 7. Why: The suggestion to add a `postinstall` script that automatically builds the project after installation is a good practice for ensuring the project is immediately usable after setup. This enhances the user experience and project readiness.
**Add a prestart script to ensure dependencies are installed before starting the demo**

Consider adding a prestart script to ensure that the necessary dependencies are installed before starting the demo. This can help avoid runtime errors due to missing dependencies.

[js/examples/e2e/package.json [6-9]](https://github.com/ComposioHQ/composio/pull/191/files#diff-d0f15c3002f5dc908b876bf16a03684f13c9ed7645c7b387f142366485bb5780R6-R9)

```diff
 "scripts": {
+  "prestart": "npm install",
   "start": "node demo.mjs",
   "test": "echo \"Error: no test specified\" && exit 1"
 },
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 7. Why: Adding a `prestart` script is beneficial for ensuring all necessary dependencies are installed before the demo starts, which can prevent runtime errors. This suggestion is practical and improves the robustness of the demo setup.
**Add a script for running tests to the scripts section**

Consider adding a script for running tests, such as "test": "mocha" or another testing framework, to facilitate automated testing.

[js/examples/langchain/package.json [7-8]](https://github.com/ComposioHQ/composio/pull/191/files#diff-4ec75cea39e08ab418b2166ec8113e37f461322488d600bfeefd3bbf696bfbf8R7-R8)

```diff
 "start": "node demo.mjs",
-"test": "echo \"Error: no test specified\" && exit 1"
+"test": "mocha"
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 7. Why: The suggestion to replace a placeholder test script with a functional one like "mocha" is beneficial for enabling actual automated testing, which is a good practice.
Possible issue

**Add a timeout to the exec function to prevent the test from hanging indefinitely**

Add a timeout to the exec function to prevent the test from hanging indefinitely if the command fails to complete.

[js/tests/run-tests.spec.ts [18-35]](https://github.com/ComposioHQ/composio/pull/191/files#diff-b41ef3bc21900850abd262a5c6c563e7c38345a3b7fe580914a503895b430507R18-R35)

```diff
-exec(`pnpm build && cd ${exampleDir} && pnpm start`, (error, stdout, stderr) => {
+exec(`pnpm build && cd ${exampleDir} && pnpm start`, { timeout: 30000 }, (error, stdout, stderr) => {
   if (error) {
     console.error(`exec error: ${error}`);
     reject(error);
     return;
   }
   console.log(`stdout: ${stdout}`);
   console.error(`stderr: ${stderr}`);

   // Assert some stuff on stdout for test checks
   try {
     expect(stdout).toContain('Expected output');
     expect(stderr).toBe('');
     resolve();
   } catch (assertionError) {
     reject(assertionError);
   }
 });
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 6. Why: Adding a timeout is a good practice to prevent tests from hanging, which can improve the robustness of test execution. It's a minor but useful improvement.
codiumai-pr-agent-pro[bot] commented 3 months ago

CI Failure Feedback 🧐

(Checks updated until commit https://github.com/ComposioHQ/composio/commit/56005ccff5311a88dcb3584285e2487a1c97222a)

**Action:** JS tests
**Failed stage:** [Run tests](https://github.com/ComposioHQ/composio/actions/runs/9743888574/job/26888344993) [❌]
**Failed tests:**
  • tests/run-tests.spec.ts:13:9 › e2e
  • tests/run-tests.spec.ts:13:9 › langchain
  • tests/run-tests.spec.ts:13:9 › openai
**Failure summary:** The action failed due to multiple errors in different tests:
  • The e2e test failed because of a BadRequestError caused by a validation error in the request payload: the `data` property must be an object and must not be empty.
  • The langchain test failed because the received value in an `expect` assertion was undefined, which is not allowed.
  • The openai test failed due to a TypeError caused by attempting to read properties of undefined (specifically, the `no_auth` property in app.yaml).
**Relevant error logs** (condensed; repeated stack traces for the same failures trimmed):

```yaml
652: > composio-core@0.1.4 test /home/runner/work/composio/composio/js
653: > playwright test tests/*
654: Running 3 tests using 1 worker
655: Running example: e2e
659: ApiError: Bad Request
660:     at catchErrorCodes (/home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264:15)
663:   url: '***/v1/connectedAccounts',
664:   status: 400,
667:   message: 'Validation error. Please check your input.',
674:   property: 'data',
677:     isObject: 'data must be an object',
678:     isNotEmpty: 'data should not be empty'
692:   body: {
693:     integrationId: '3011084c-0c3e-4787-9949-8179675c1c5b',
694:     userUuid: 'default',
695:     redirectUri: undefined
696:   },
...
749: FRunning example: langchain
750: BadRequestError: 400 Invalid 'functions': empty array. Expected an array with minimum length 1, but got an empty array instead.
795: exec error: Error: expect(received).toContain(expected) // indexOf
796: Matcher error: received value must not be null nor undefined
797: Received has value: undefined
798: FRunning example: openai
800: if (app.yaml.no_auth) {
802: TypeError: Cannot read properties of undefined (reading 'no_auth')
803:     at Entity.execute (/home/runner/work/composio/composio/js/lib/sdk/index.js:105:22)
...
919: 3 failed
920:   tests/run-tests.spec.ts:13:9 › e2e
921:   tests/run-tests.spec.ts:13:9 › langchain
922:   tests/run-tests.spec.ts:13:9 › openai
923: ELIFECYCLE  Test failed. See above for more details.
924: ##[error]Process completed with exit code 1.
```

✨ CI feedback usage guide:

The CI feedback tool (`/checks`) automatically triggers when a PR has a failed check. The tool analyzes the failed checks and provides several feedbacks: failed stage, failed test name, failure summary, and relevant error logs. In addition to being triggered automatically, the tool can also be invoked manually by commenting on a PR:

```
/checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
```

where `{repo_name}` is the name of the repository, `{run_number}` is the run number of the failed check, and `{job_number}` is the job number of the failed check.

Configuration options:
  • `enable_auto_checks_feedback` - if set to true, the tool will automatically provide feedback when a check is failed. Default is true.
  • `excluded_checks_list` - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
  • `enable_help_text` - if set to true, the tool will provide a help message with the feedback. Default is true.
  • `persistent_comment` - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
  • `final_update_message` - if `persistent_comment` is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

See more information about the `checks` tool in the [docs](https://pr-agent-docs.codium.ai/tools/ci_feedback/).
utkarsh-dixit commented 3 months ago

Pull Request Summary

Changes and Objectives

This PR updates the JavaScript examples and the testing workflow, and adds support for the COMPOSIO_API_KEY and COMPOSIO_BASE_URL environment variables across the project. Below are the key changes:

    1. Added Example Tests for JavaScript and Updated Existing Examples:

      • Created new tests and sample workflows in JavaScript.
      • Modified numerous example files to update their behavior and configurations.
    2. Updated GitHub Actions Workflow:

      • Added a JavaScript test workflow in the .github/workflows/common.yml for running JavaScript tests automatically.
    3. Modified SDK Configuration:

      • Updated the OpenAPI configuration and the Composio class to support COMPOSIO_API_KEY and COMPOSIO_BASE_URL environment variables.
    4. Included Playwright for Testing:

      • Added Playwright dependencies and configuration for end-to-end testing.
      • Created new test files with Playwright to run and validate examples and scripts.
    5. General Code Maintenance:

      • Removed unnecessary dependencies and code sections for better clarity and performance.

    Categorization

    Important Change Files

    This PR mainly affects multiple areas including JavaScript examples, workflows, and test configurations. The important files are: