Closed: eleanorjboyd closed this issue 1 month ago.
Hello, I would love to help out in order to get access to coverage data.
In fact, I forked the repo this week and started digging into the extension's testing structure to see whether it would be possible to run tests automatically on modification, similar to wallaby.js but for Python, for a better TDD experience.
A requirement for a smooth experience would be to re-run only the tests affected by real-time edits, which means knowing, for each modified statement, the list of tests that execute it; in other words, coverage data.
I would be happy to contribute if I can be of any use.
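To make the idea concrete, here is a minimal sketch of the selection step, assuming a line-to-tests mapping is already available (the `select_affected_tests` helper and the sample mapping are hypothetical, for illustration only):

```python
# Hypothetical sketch: re-run only the tests that touch edited lines.
# Assumes some earlier step produced a mapping from each line number to
# the ids of the tests that executed it (which is what coverage data with
# per-test contexts can provide).

def select_affected_tests(
    line_to_tests: dict[int, set[str]],  # lineno -> test ids covering it
    modified_lines: set[int],
) -> set[str]:
    affected: set[str] = set()
    for line in modified_lines:
        affected |= line_to_tests.get(line, set())
    return affected

# Example: edits to lines 10 and 42 would re-run only the two covering tests.
mapping = {
    10: {"tests/test_a.py::test_x"},
    42: {"tests/test_b.py::test_y"},
}
print(select_affected_tests(mapping, {10, 42}))
```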
Hi @MathisFederico, thank you for your offer! Automatically re-running tests on modification isn't something we have planned, so that is something you could contribute once coverage is finished.
I am planning on outlining the design for coverage when I pick this up next in 2 weeks, so your input on that would be helpful. Additionally, I am always looking for test cases to use, especially ones that cover common scenarios and edge cases. I'll keep this thread in the loop, thanks!
> Hi @MathisFederico, thank you for your offer! Automatically re-running tests on modification isn't something we have planned, so that is something you could contribute once coverage is finished.
Being a heavy test user, I'd be happy to contribute! @eleanorjboyd
That was the plan anyway, but coverage data is a prerequisite, so I might as well help with this first. (It will get me some practice with the codebase anyway.)
> I am planning on outlining the design for coverage when I pick this up next in 2 weeks, so your input on that would be helpful.
Anytime! Feel free to ping me whenever. I did start looking into it myself, and it was difficult to get all the interesting data (the test contexts for each covered line) except by parsing the HTML output from coverage.py (see the sketch below).
I'm curious to see what you have in mind for coverage anyway, and how you picture displaying it to the user.
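For reference, coverage.py does expose this data programmatically, which avoids the HTML parsing. This is only a sketch, and it assumes the run recorded dynamic contexts (for example via pytest-cov's `--cov-context=test`, or `dynamic_context = test_function` in the coverage configuration):

```python
from coverage import CoverageData

# Reads the .coverage data file from the current directory. The file must
# have been produced by a run with dynamic contexts enabled, e.g.:
#   pytest --cov=. --cov-context=test
data = CoverageData()
data.read()

for filename in data.measured_files():
    # contexts_by_lineno() returns {lineno: [context names]}; when contexts
    # were recorded per test, the context names are test identifiers.
    for lineno, contexts in sorted(data.contexts_by_lineno(filename).items()):
        print(f"{filename}:{lineno} covered by {contexts}")
```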
> Additionally, I am always looking for test cases to use, especially ones that cover common scenarios and edge cases. I'll keep this thread in the loop, thanks!
I have written quite a lot of pytest tests across my various open-source projects; dummy versions of those could cover a diverse set of pytest cases.
Let me know if you want me to dig into those. I'd be happy to add them in a PR myself if I'm shown a basic example (see the sketch below).
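For concreteness, here is a hypothetical example of the kind of small, self-contained pytest case this could mean, exercising one common scenario and a couple of edge cases:

```python
import pytest

# A dummy test module: one parametrized happy-path test plus an error case.
@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("42", 42),    # common case
        ("  7 ", 7),   # edge case: surrounding whitespace
        ("-1", -1),    # edge case: negative number
    ],
)
def test_parse_int(raw, expected):
    assert int(raw) == expected

def test_parse_int_rejects_garbage():
    with pytest.raises(ValueError):
        int("not a number")
```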
Hi @eleanorjboyd, any news on the coverage data API?
I'm extremely excited about seeing Python language support for integrated code coverage visibility directly in the IDE.
@kieferrm and @eleanorjboyd - Is this the correct issue to watch for updates on adding Python-language support for the new VS Code Code Coverage API, and is there any specific timeline you have in mind now that this API has launched?
If not immediately on the horizon within this extension, does anyone know of plans (in the community, perhaps) for something like a bespoke Python code-coverage extension?
@eleanorjboyd, @kieferrm - Circling back here with 2 questions.
> If test coverage information is available, GitHub Copilot can offer a "Generate tests using Copilot" CodeLens for functions and methods that are not yet covered by tests.
Thanks so much!
Hi @aaronsteers! This feature is in the pre-release version of the vscode-python extension and will be out in the next stable release! I would love it if you could give it a try when it comes out and let me know how it goes! For the coverage-driven unit test generation feature: @connor4312, do you have insight into whether this will work out of the box or require additional setup?
@eleanorjboyd - Exciting!! 🚀
I'll give it a try for sure - thanks!
@eleanorjboyd - Trying to give this a spin with the insiders build...
If I switch to insiders, I initially appear to get version `v2024.17.2024100202`, which I believe is correct. But then my extension seems to auto-update (alphanumerically?) to `v2024.9.11721010`, which is actually a downgrade. I will keep trying, but wanted to let you know of this issue in case it is helpful.
UPDATE: I succeeded in getting version `v2024.15.2024091301` when installing to VS Code Insiders. I was previously attempting from the stable version of VS Code.
I get an error message if I try to run "Test with Coverage" from the VS Code Insiders build:
```
run_pytest_script.py: error: unrecognized arguments: --cov=.
```
Looks like `--cov` is not declared in `run_pytest_script.py`.
```
Running pytest with args: ['-p', 'vscode_pytest', '--rootdir=/Users/ajsteers/Source/PyAirbyte', '--cov=.', '/Users/ajsteers/Source/PyAirbyte/tests/unit_tests/test_caches.py::test_get_sql_alchemy_url']
ERROR: usage: run_pytest_script.py [options] [file_or_dir] [file_or_dir] [...]
run_pytest_script.py: error: unrecognized arguments: --cov=.
inifile: /Users/ajsteers/Source/PyAirbyte/pyproject.toml
rootdir: /Users/ajsteers/Source/PyAirbyte
```
Hi! You need to install pytest-cov in your environment for it to work! We are finishing up the docs that explain this, so I'll send them over once they are published. Thanks!
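For anyone hitting the same error, here is a quick sanity check of the interpreter VS Code has selected (a minimal sketch; `pytest_cov` is the module name installed by the pytest-cov package):

```python
import importlib.util

# "Test with Coverage" passes --cov to pytest (see the log above), and only
# the pytest-cov plugin understands that flag, so it must be importable in
# the selected environment.
if importlib.util.find_spec("pytest_cov") is None:
    print("pytest-cov is missing; install it with: python -m pip install pytest-cov")
else:
    print("pytest-cov is available.")
```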
Feels like Christmas! Thanks, team!
❤️❤️❤️ https://code.visualstudio.com/updates/v1_94#_python ❤️❤️❤️
A new API has been created to allow test coverage to be reported to the user. Implement this for Python, and also provide feedback on the API as I go.