kondratyev-nv / vscode-python-test-adapter

Python Test Adapter for the VS Code Test Explorer
https://marketplace.visualstudio.com/items?itemName=LittleFoxTeam.vscode-python-test-adapter

Add testplan support #256

Closed: cnx-tcsikos closed this 3 years ago

cnx-tcsikos commented 3 years ago

In my company we are using Testplan as the test framework, and currently there is no extension for VS Code that supports it.

If we get close to approval of the PR, I'll update the necessary docs. Note that I'm not an experienced TS programmer, so any reformatting suggestions are welcome.

Caveats:

  • Testplan doesn't report file/line information.

cnx-tcsikos commented 3 years ago

Thanks for working on this! My main concern is that there is a lot of code duplication and probably some edge cases (e.g., would adding print('Hello world') to the beginning of a test file break test discovery?). But I believe most of these concerns can be fixed later if you feel like doing it.

I see that the tests are failing, do you need any help or suggestions on fixing those?

Also, have you tried running it locally? Do you mind attaching some screenshots of how it looks?

Testplan doesn't report file/line information.

It's OK to leave those empty, at least for now. This mostly affects UX: the go-to-file/test feature won't work.

I just checked, and yes, test discovery does break on print("Hello world"); that is concerning news.

There are some issues that I'm aware of:

  • testplan is not installed on Azure
  • only install testplan on Python 3.6+ (any ideas on this are welcome)
  • only run testplan tests on Python 3.6+

Locally I have only tried it on Windows atm; I'm hoping that once the Windows tests are able to run on Azure, the other platforms will follow.

Here is an example image of the plugin in action (sorry for the white boxes, just to be on the safe side regarding IP).

(screenshot: testplan-vscode) The [...] comes from the parametrization annotations of testplan.

kondratyev-nv commented 3 years ago

I just checked, and yes, test discovery does break on print("Hello world"); that is concerning news.

Are you planning to fix this?

There are some issues that I'm aware of:

  • testplan is not installed on Azure
  • only install testplan on Python 3.6+ (any ideas on this are welcome)
  • only run testplan tests on Python 3.6+

There is no straightforward way to do that as of now. If testplan can be installed with pip, you can add testplan to requirements.txt, and it seems there is a way to restrict that installation to specific Python versions - https://stackoverflow.com/a/35614580/4182275 (I haven't tried it, though).
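
For what it's worth, the filtering from that answer uses a standard pip environment marker (PEP 508); a hypothetical requirements.txt entry, assuming testplan is pip-installable under that name, would look like:

```
# Hypothetical requirements.txt entry; assumes a pip-installable "testplan" package
testplan; python_version >= "3.6"
```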

Furthermore, another tricky part is writing tests that will respect these conditions. I think you can add an environment variable with the Python version and use it as a condition for whether to skip the testplan tests (e.g. https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/test/tests/testCancellation.test.ts#L44, but I think I saw that there is a way to do something similar with just mocha).
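
A minimal sketch of the environment-variable idea, assuming a GitHub Actions matrix and a variable name of my own choosing (the mocha side would then check process.env and skip accordingly):

```yaml
# Hypothetical CI test step: expose the matrix Python version so the
# test suite can decide whether to run the testplan tests.
- name: Run tests
  run: npm test
  env:
    PYTHON_VERSION: ${{ matrix.python-version }}
```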

cnx-tcsikos commented 3 years ago

Are you planning to fix this?

I've created a bug report for testplan. It's not as bad as I believed earlier: the output of any print("Hello World") in the code appears before all the test printouts. So currently it will 'just' misinterpret the prints as test suites. On my part, what I could do for now is to:

  • search test_plan.py for any ^print\("(.*)"\) match and try to remove those strings from the list (see the sketch below). This would only be a band-aid: if the print contains anything more than a string literal, it won't work.

  • run testplan with a presumably unused test name like "sfkej3adsf4ts". This will still print the unwanted Hello World, but it will also take some time trying to run the non-existent test.
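
A minimal sketch of the first option, with illustrative names (as noted, it only catches plain string-literal prints):

```python
import re

# Matches top-level print("...") calls with a single string literal;
# f-strings, variables, or concatenation will slip through.
PRINT_RE = re.compile(r'^print\("(.*)"\)')

def printed_literals(source_path):
    """Collect string literals printed by test_plan.py so they can be
    filtered out of the discovered-suite list. A band-aid only."""
    with open(source_path) as f:
        return {m.group(1) for m in map(PRINT_RE.match, f) if m}
```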

There is no straightforward way to do that as of now. If testplan can be installed with pip, you can add testplan to requirements.txt, and it seems there is a way to restrict that installation to specific Python versions - https://stackoverflow.com/a/35614580/4182275 (I haven't tried it, though).

I made some modifications to azure-pipelines.yml to install testplan. However, the pipeline does not seem to use that file, e.g. there is no "Install Python Dependencies" step in it at all. Is it possible that something else is used?

Furthermore, another tricky part is writing tests that will respect these conditions. I think you can add an environment variable with the Python version and use it as a condition for whether to skip the testplan tests (e.g. https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/test/tests/testCancellation.test.ts#L44, but I think I saw that there is a way to do something similar with just mocha).

Thanks, I'll look into that.

kondratyev-nv commented 3 years ago

I've created a bug report for testplan. It's not as bad as I believed earlier: the output of any print("Hello World") in the code appears before all the test printouts. So currently it will 'just' misinterpret the prints as test suites. On my part, what I could do for now is to:

  • search test_plan.py for any ^print\("(.*)"\) match and try to remove those strings from the list. This would only be a band-aid: if the print contains anything more than a string literal, it won't work.

  • run testplan with a presumably unused test name like "sfkej3adsf4ts". This will still print the unwanted Hello World, but it will also take some time trying to run the non-existent test.

Is it possible to import testplan tests with some Python API? E.g., as unittest does not have plugin support, I've created a wrapper Python script that fetches all tests and outputs discovery results in an easy-to-consume format - https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/src/unittest/unittestScripts.ts#L205-L208. These docs might help - https://testplan.readthedocs.io/en/latest/multitest.html#programmatic-listing, but I'm not sure if this will cover all possible test configurations. Or maybe specify some kind of specific pattern to distinguish regular prints from discovered tests - https://testplan.readthedocs.io/en/latest/multitest.html#command-line-listing. Let me know if this helps.

Overall, I can approve the PR if you complete testplan support even with this issue and try fixing it in some following PRs. Let's focus on the general cases first 🙂

cnx-tcsikos commented 3 years ago

These docs might help - https://testplan.readthedocs.io/en/latest/multitest.html#programmatic-listing, but I'm not sure if this will cover all possible test configurations. Or maybe specify some kind of specific pattern to distinguish regular prints from discovered tests - https://testplan.readthedocs.io/en/latest/multitest.html#command-line-listing. Let me know if this helps.

As I see it, this creates the same problem we want to bypass: not forcing the user to do (or not do) stuff. With this we could have a printout with the special discovery_begins/discovery_ends tag, but only if the user implements it, because you have to annotate your testplan app with a special lister.

Overall, I can approve the PR if you complete testplan support even with this issue and try fixing it in some following PRs. Let's focus on the general cases first 🙂

Thanks!

I just realized that there are separate "Azure Pipelines" and "GitHub Actions" CI setups.

cnx-tcsikos commented 3 years ago

Is it possible to import testplan tests with some Python API?

I'm not sure what you are asking. There is a report package in testplan.

kondratyev-nv commented 3 years ago

I'm not sure what you are asking. There is a report package in testplan.

The idea is to do something similar to how test discovery is done for unittest (see https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/src/unittest/unittestScripts.ts). Unittest outputs discovered tests to stdout by default. However, there is an option to call unittest discovery programmatically, which returns a set of objects that can be processed and output in a format the extension can consume (defaultTestLoader.discover(start_directory, pattern=pattern) in the script). So my question is whether it is possible to do something similar with testplan. Does that make sense?
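
For illustration, a stripped-down sketch of that unittest flow (the real wrapper script in the repo does more, e.g. error handling; names here are illustrative):

```python
import unittest

def list_tests(start_directory, pattern="test*.py"):
    # Programmatic discovery returns a (possibly nested) TestSuite
    suite = unittest.defaultTestLoader.discover(start_directory, pattern=pattern)
    tests = []

    def walk(item):
        if isinstance(item, unittest.TestSuite):
            for child in item:  # a TestSuite is iterable
                walk(child)
        else:
            tests.append(item.id())  # e.g. "module.TestClass.test_method"

    walk(suite)
    return tests

if __name__ == "__main__":
    # One test id per line: an easy-to-consume format for the extension
    for test_id in list_tests("."):
        print(test_id)
```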

When adding new dependencies to package.json, please run npm install to update package-lock.json. Also, prefer npm ci to restore dependencies - this ensures reproducible behavior (npm install can fetch newer versions when available, while npm ci uses the versions locked in package-lock.json).
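
In short:

```sh
npm install   # resolves dependencies; may update package-lock.json
npm ci        # clean install, strictly from package-lock.json (reproducible)
```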

I don't know why the Azure pipeline is not triggered. Let's focus on GitHub Actions, as those seem to be triggered fine. Both do the same job, so I don't expect any major issues.

cnx-tcsikos commented 3 years ago

I'm not sure what you are asking. There is a report package in testplan.

The idea is to do something similar to how test discovery is done for unittest (see https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/src/unittest/unittestScripts.ts). Unittest outputs discovered tests to stdout by default. However, there is an option to call unittest discovery programmatically, which returns a set of objects that can be processed and output in a format the extension can consume (defaultTestLoader.discover(start_directory, pattern=pattern) in the script). So my question is whether it is possible to do something similar with testplan. Does that make sense?

Ah, okay. No, I don't think there is a way to programmatically get the tests. If there was, they would have mentioned it in the issue. But heck, I'll ask them. (Edit: it's a no from them.)

When adding new dependencies to package.json, please run npm install to update package-lock.json. Also, prefer npm ci to restore dependencies - this ensures reproducible behavior (npm install can fetch newer versions when available, while npm ci uses the versions locked in package-lock.json).

Yeah, my bad :) I'm using SourceTree for git, and it showed that there were no significant changes. Usually that would mean only EOL changes, but it was wrong. Corrected!

I don't know why the Azure pipeline is not triggered. Let's focus on GitHub Actions, as those seem to be triggered fine. Both do the same job, so I don't expect any major issues.

Maybe because the PR was not "ready for review"? I did some shenanigans in the Azure pipeline file blindly; a test run is probably needed to check whether the syntax is still right.

cnx-tcsikos commented 3 years ago

@kondratyev-nv One test seems stuck and I can't see any logs of what it is doing. Can the PR be merged?

cnx-tcsikos commented 3 years ago

Hmm, I cannot replicate the errors from CI. It seems that TestplanTestRunner::load returns undefined, which only happens when either testplan is not enabled or something banal went wrong. And testplan is enabled :D

kondratyev-nv commented 3 years ago

@cnx-tcsikos You can try temporarily adding an ENABLE_TEST_LOGGING=1 env variable to the CI test step to enable additional output.
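
For example, in the GitHub Actions workflow that could look something like this (step name and command are hypothetical; only ENABLE_TEST_LOGGING comes from the extension):

```yaml
- name: Run tests
  run: npm test
  env:
    ENABLE_TEST_LOGGING: 1  # temporary: verbose adapter logging for debugging CI
```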

kondratyev-nv commented 3 years ago

It seems that cancellation tests are highly unstable. Maybe skip those for now?

cnx-tcsikos commented 3 years ago

It seems that cancellation tests are highly unstable. Maybe skip those for now?

Now pytestScript::should discover tests fails on macOS...

kondratyev-nv commented 3 years ago

Thanks for working on this! I'll try it out myself and will publish a new version in a couple of days.