olegserov opened this issue 6 years ago
I really like this idea, and I mentioned it to @nedbat as well. We could use the new dynamic contexts feature to implement this. Basically, the @covers decorator would register a test function name with a block of code that it intends to cover. That block of code can be translated to a range of line numbers. During reporting, if the dynamic context is set to the test function name, then we have a way to look up which line numbers should be counted and which should not.
Was going to open an issue for this exact thing! This would be super helpful! After using it in PHPUnit for years I really miss it, and I often resort to running tests singly to emulate it. There is additional documentation on the annotation in the PHPUnit manual.
In addition to specifying covered methods for a test, the ability to specify a covered class for a test class would be useful when method granularity isn't needed.
This is an interesting new idea. I guess you could use something like dynamic contexts to do this, even as a post-processing step (run coverage, collect the contexts, then read the @covers() information on tests, and delete line data that fell outside the covers() scopes).
I'm not sure how to limit the measurement at run time, but I haven't thought about it much yet.
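As a post-processing sketch: coverage.py 5+ records which dynamic contexts executed each line (retrievable via CoverageData.contexts_by_lineno()). Given such a per-line mapping plus the declared scopes, the filtering step could look like this; the data shapes, file names, and test names below are all hypothetical:

```python
# Hypothetical shapes: `measured` maps filename -> {lineno: contexts that ran it};
# `scopes` maps a test context -> the (filename, first_line, last_line) it covers.
measured = {
    "myproj/utils.py": {
        10: {"test_foo"},   # inside test_foo's declared scope
        30: {"test_foo"},   # outside test_foo's declared scope
        40: {"test_bar"},   # test_bar declared no scope at all
    },
}
scopes = {"test_foo": ("myproj/utils.py", 10, 20)}

def filter_lines(measured, scopes):
    """Drop executed lines that fall outside the executing test's declared scope."""
    kept = {}
    for filename, lines in measured.items():
        for lineno, contexts in lines.items():
            for ctx in contexts:
                if ctx not in scopes:
                    # Tests without a declared scope count everywhere.
                    kept.setdefault(filename, set()).add(lineno)
                    break
                fname, first, last = scopes[ctx]
                if fname == filename and first <= lineno <= last:
                    kept.setdefault(filename, set()).add(lineno)
                    break
    return kept

print(filter_lines(measured, scopes))  # line 30 is dropped
```

Line 10 survives because test_foo declared it, line 40 survives because test_bar declared nothing, and line 30 is deleted because only an out-of-scope test touched it.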
I started putting together a proof-of-concept for this, just to see how it might work. I created a @covers(...) decorator that records covered line numbers in a separate file, and then a coverage plugin that uses that file to filter lines. I'm not sure if this is a good approach, but it at least shows it's possible.
https://github.com/mayoroftuesday/coverdecorator_poc
Any feedback would be greatly appreciated.
How do you annotate coverage of a method? Class.method?
@nedbat any new thoughts on this?
@djlambert I haven't done any work on this, though as it happens, I'm on today's episode of Django Chat and we discussed this very idea. Can you point me to an example of a code base that uses a decorator like this? I wonder whether many developers would put in the work to annotate their test functions.
BTW, the PHPUnit page has moved here: https://phpunit.readthedocs.io/en/latest/code-coverage-analysis.html#specifying-covered-code-parts
(sorry for all the separate comments) I am interested to play around with how this could work, and what it would look like to users. IIUC, @mayoroftuesday's POC writes a separate data file, then uses that file during the reporting phase to limit the interpretation of the recorded data. I'm thinking we can instead limit what lines get recorded in the first place.
Also, what sort of argument should be allowed in the decorator? The POC needs it to be an importable function. I imagine a class or module would also be desirable, but I haven't used this feature before, so I'm not sure. @mayoroftuesday is there a reason you made the argument a string instead of the function itself?
It's been a long long time since I've worked in PHP. One of the packages I remember using it is https://github.com/doctrine/orm
I'd like to see the decorator take a class, method, or function.
A possible use case: a method called very frequently by lots of code, where you only want specific tests to count toward its coverage.
A global flag that ignores coverage for any test not using the decorator could be helpful too, so that every test has to specify its coverage scope.
> A possible use case: a method called very frequently by lots of code, where you only want specific tests to count toward its coverage.
That's an interesting point: the decorator as currently described wouldn't be enough to handle that case. You'd also need a setting that said, "Don't count any coverage for frequently_called_function unless it's in an @covers("frequently_called_function") test."
> A global flag that ignores coverage for any test not using the decorator could be helpful too, so that every test has to specify its coverage scope.
This is where I wonder if developers would really put in the work to decorate all of their tests. And if they did, would they be happy with the result? Do people really have complete enough test suites that every function is covered by tests specifically for that function?
I don't currently have test suites that complete, but if it were an option I would certainly try it. I don't know that I'd go back to existing test suites and add it all over the place, but I definitely would in new projects. Once the decorator functionality is there, I wouldn't think it would be much effort to say "if there's no decorator, ignore the line" in the trace function?
Using the decorator multiple times on a test would also be useful.
Knowing exactly what a test is supposed to cover is the key to cutting down the number of tests to run during mutation testing. If this feature is widely used it can make mutation testing much more efficient.
It would help if the API is such that we can programmatically crawl the annotations statically so we can select the relevant tests for a given mutant.
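Since @covers(...) arguments would typically be string literals, a static crawl is straightforward with the stdlib ast module. A rough sketch, assuming a decorator spelled `covers` and a hypothetical sample test file:

```python
import ast

# Hypothetical test source to crawl; the decorator name `covers` is assumed.
source = '''
@covers("myproj.utils.frequently_called_function")
def test_hot_path():
    pass

def test_undecorated():
    pass
'''

def crawl_covers(source):
    """Map each test name to the string targets of its @covers(...) decorators."""
    found = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            targets = [
                dec.args[0].value
                for dec in node.decorator_list
                if isinstance(dec, ast.Call)
                and isinstance(dec.func, ast.Name) and dec.func.id == "covers"
                and dec.args and isinstance(dec.args[0], ast.Constant)
            ]
            if targets:
                found[node.name] = targets
    return found

print(crawl_covers(source))
# e.g. {'test_hot_path': ['myproj.utils.frequently_called_function']}
```

A mutation-testing tool could run such a crawl over the test tree, then select only the tests whose declared targets contain the mutated line.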
I was thinking it could work similar to how the @patch(…) decorator works for Mock. So it would be a full path, like @covers('myproj.utils.foo')
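Resolving such a dotted string could work the same way mock.patch does: import the longest importable module prefix, then walk the remaining attributes. A rough sketch (not coverage.py code):

```python
import importlib
import inspect

def resolve(dotted_path):
    """Resolve a string like 'myproj.utils.foo' to the object it names."""
    parts = dotted_path.split(".")
    # Try the longest module prefix first, then fall back to attribute access.
    for i in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
        except ImportError:
            continue
        for attr in parts[i:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(f"cannot resolve {dotted_path!r}")

# The resolved object can then be turned into a line range for filtering:
func = resolve("json.dumps")
_, first_line = inspect.getsourcelines(func)
```

Accepting strings avoids importing the code under test at decoration time, which also keeps the annotation crawlable statically.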
@nedbat would still love to have this functionality! Any thoughts or plans to add it?
I haven't given this more thought. I would definitely like to see this concept explored as a separate proof-of-concept. Let me know if there's anything I can do to support something like that.
I've been thinking about this, but you know, life gets in the way.
I'm thinking decorators to specify coverage scopes:
@coverage.scope(MyClass)
def test_my_class():
    sut = MyClass()
    # ...

@coverage.scope('my_package._my_module.MyClass')
def test_my_class():
    sut = MyClass()
    # ...

@coverage.scope(MyClass, 'my_one_method')
@coverage.scope(MyClass, 'my_other_method')
def test_my_class():
    sut = MyClass()
    # ...
Would it be possible to add a flag to the Coverage API that makes it switch between normal measurement and this scoped measurement? Now, I don't know about Coverage internals, so I don't know if coverage has a way to pick up this information.
Hi, I needed the same functionality for my current assignment and have made a proof-of-concept here: https://github.com/j11n/coverage_tools
I run the normal test/coverage process without any additions, and afterwards I run a recalculation of the coverage that looks at the decorators to decide which contexts are allowed for particular source line numbers.
bin/run_all.sh runs an example. Take a look and tell me what you think.
BR, /Jonny
In PHPUnit there is a @covers annotation to specify which classes/methods/functions are covered by a specific unit test. This helps avoid false coverage.
If a test has this annotation, then any code that was executed during that test but is not explicitly listed in @covers is ignored. That way you can ensure that you have actually covered the code AND covered it with relevant test cases.
I would really like to see this feature supported. I know this module does not do testing, but it provides the coverage measurement, and if there is a way to dynamically exclude code from coverage then I can integrate that into testing.