tarpas closed this issue 6 years ago.
As @blueyed noted, a reproducible test case would be the best. If we don't have that, there are a couple of paths forward, but they are not quick. I'll try to comment here when I have the ideas formulated.
I don't think a "conservative mode" is the way to go. We just need to fix the one (maybe two) bugs.
What could be of value is a "force-run" functionality, which would run the selected tests and write the new coverage data.
> I don't think a "conservative mode" is the way to go. We just need to fix the one (maybe two) bugs.

:+1:
> What could be of value is a "force-run" functionality, which would run the selected tests and write the new coverage data.

Isn't that what `--tlf` should do in the first place?

But I guess what you mean, to work around this, is something like `--testmon` but without deselecting anything.
In general I think some more reporting on what gets deselected and why could be very useful. Also maybe some dry-run mode where it would only report what would get run and what it knows about.
> In general I think some more reporting on what gets deselected and why could be very useful. Also maybe some dry-run mode where it would only report what would get run and what it knows about.

👍
> What could be of value is a "force-run" functionality, which would run the selected tests and write the new coverage data.
This would be a refined & better version of what I do now: `git clean -xfd` to clear caches + re-run. And it could also find any mistakes it's made by comparing its prior results to the new output (and maybe even compile a bug report!)
> a reproducible test case would be the best. If we don't have that, there are a couple of paths forward, but they are not quick. I'll try to comment here when I have the ideas formulated.
I'll try to spot a pattern for what could be causing the issues. Without being able to inspect what's going on (e.g. look at the DB), it's difficult to build a repro case from a bunch of failures in the project...
> Isn't that what `--tlf` should do in the first place?
No, `--tlf` is the testmon equivalent of `--lf` from pytest itself:

> `--lf`, `--last-failed` - to only re-run the failures.

So it reruns the failing tests, regardless of whether they are affected or not (e.g. for debugging purposes - which is questionable, because debuggers and testmon clash).
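Conceptually, last-failed selection is just a filter over the set of failures recorded on the previous run, with no change tracking involved. A hypothetical sketch (not pytest's actual implementation, just the semantics described above):

```python
def select_last_failed(all_tests, last_failed):
    """Mimic --lf semantics: keep only the tests that failed on the
    previous run, regardless of whether their code changed (unlike
    testmon's change-based selection)."""
    selected = [t for t in all_tests if t in last_failed]
    # pytest falls back to running everything when nothing failed before
    return selected if selected else list(all_tests)

print(select_last_failed(["test_a", "test_b", "test_c"], {"test_b"}))
# → ['test_b']
print(select_last_failed(["test_a", "test_b"], set()))
# → ['test_a', 'test_b']
```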
@max-imlian Are you using a debugger and testmon at the same time? That might explain your problems.
> Are you using a debugger and testmon at the same time?
Yes!
@max-imlian That does not work, sorry.
See #97 (duplicate)
Thanks @tarpas
To be clear, this includes the `--pdb` option?
`--pdb` runs pdb on exception? Then I assume pdb calls `settrace` and messes things up, yes.
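The clash is easy to see: CPython allows only one trace function per thread, so whichever tool calls `sys.settrace` last wins. A minimal sketch (the two tracer functions are stand-ins, not testmon's or pdb's real ones):

```python
import sys

def coverage_tracer(frame, event, arg):
    # stand-in for the tracer a coverage tool installs
    return coverage_tracer

def debugger_tracer(frame, event, arg):
    # stand-in for the tracer pdb installs when it takes over
    return debugger_tracer

previous = sys.gettrace()          # remember whatever was installed
sys.settrace(coverage_tracer)
sys.settrace(debugger_tracer)      # silently replaces the coverage tracer
print(sys.gettrace() is debugger_tracer)  # → True
sys.settrace(previous)             # restore the original tracer
```

So once pdb takes over, the coverage tracer stops receiving events and testmon's recorded data for that run is incomplete.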
OK, thanks, that explains a lot
from @max-imlian https://github.com/tarpas/pytest-testmon/issues/90#issuecomment-408984214:

> FYI I'm still having real issues with testmon, where it doesn't run tests despite both changes in the code and existing failures, even when I pass `--tlf`.
> I love the goals of testmon, and it performs so well in 90% of cases that it's become an essential part of my workflow. As a result, when it ignores tests that have changed, it's frustrating. I've found that `--tlf` often doesn't work, which is a shame, as it was often a 'last resort' when testmon made a mistake by ignoring too many tests.
>
> Is there any info I can supply that would help debug this? I'm happy to post anything.
>
> Would there be any use for a 'conservative' mode, where testmon would lean towards testing too much? A Type I error is far less costly than a Type II.