As it stands, this behaviour is at least partially intended. For now the Test Explorer has no way of knowing whether a given test has been modified. Without a clear indication of which tests have been changed between parsing runs, we err on the side of caution and invalidate all results.
This is not to say that this request is wrong; it's just a lot more work than it might seem right now. The holy grail would be to invalidate only those tests where either the test itself changed, or code that's covered by the test changed.
I do see that, but I personally would rather have the opposite behaviour by default: assume a test hasn't been invalidated, and leave it up to the user to rerun tests that should have been. This is obviously also not ideal; however, when I am writing tests, in most cases most of the tests haven't been touched between refreshes. If I have 500 tests and add 1 new one, having to rerun the 500 others is a much bigger detriment to me than occasionally failing to invalidate one or two that were actually touched without me realising. Regardless, I will run all of my tests before committing or releasing, so the fact that it might incorrectly flag some invalidated tests as passed while I am adding tests or changing the behaviour of a tiny part of my code doesn't really matter all that much to me.
As I see it, there are a couple of levels of behaviour I would be fine with, at increasing levels of complexity.
To me, option 1 is sufficient to make me not cringe at the thought of adding new tests, as incrementally adding tests is a lot of work when you already have a lot of them, and it will only get worse.
I was thinking about how I go about this when I unit test C++ code, and what I do there is rerun my test suite with a filter. It might be that filtering tests is already there and I just haven't been looking at the Test Explorer closely enough. If you can run, say, all tests from one test module, that should be sufficient for me. I don't have Rubberduck available now, but I will have a look tomorrow and see if I can find a usable workflow 😄
In fact, in the Test Explorer you can click on the Run button, which shows a dropdown of which tests to run: the selected tests, the selected category, plus a few others.
Incidentally, I see that the extra options aren't surfaced on Rubberduck's Unit Test menu. Considering that the Run button also doesn't have the typical UI signals that it's a dropdown menu, that kind of kills the discoverability of those extra options.
It would be really useful if one could group them by module, because then I could collapse the modules I'm not working on and only rerun the test module I am looking at. I might also get some mileage out of the category system, as I could for example categorise the tests that actually interact with the workbook and just not run them most of the time; those are the only ones that take more than a fraction of a second to run. As a side note, how do you set a test category? I couldn't find it on the wiki.
I had another look, and you can group tests by location, which means that workflow-wise it should be alright. However, there doesn't seem to be any way to run all tests in a location: there is no way to select multiple tests for "Run -> Selected Test". I see that there is a "Selected Category" option, but it is greyed out for me, possibly because I don't have any categories, since I don't know how to set them. I can open a separate feature request for either multi-select or a "Run Selected Grouping", either of which would let me run all tests in the module I am currently working on.
Ok, I see there are a number of issues with the test explorer...
1) Selection is single-only and doesn't allow for multiple selections. I thought that was addressed in a previous PR, but that does not appear to be the case. That would be an enhancement.
2) Category needs to be documented/surfaced better. To set a category, you update the annotation like this (see the sketch after this list):
'@TestMethod("MyCategory")
This unfortunately doesn't work for the @TestModule annotation. That's another enhancement.
3) The category parameter categorizes the test methods just fine, but it unfortunately does not actually enable the Run Selected Category option. That's a bug.
4) Then there's running tests by location. That's yet another enhancement.
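Since the annotation syntax in item 2 is easy to miss, here's a minimal sketch of what a categorized test module could look like. The `'@TestMethod("...")` category syntax is the one shown above; the surrounding scaffolding (`'@TestModule`, the late-bound `Assert` field, `'@ModuleInitialize`) follows Rubberduck's standard test module template, and `Add` is a hypothetical function standing in for real code under test.

```vba
'@TestModule
Option Explicit
Option Private Module

Private Assert As Object

'@ModuleInitialize
Private Sub ModuleInitialize()
    'Late-bound Rubberduck assert class, as in the default test module template.
    Set Assert = CreateObject("Rubberduck.AssertClass")
End Sub

'@TestMethod("FastTests")
Private Sub AddReturnsTheSum()
    Assert.IsTrue Add(2, 2) = 4, "Add(2, 2) should return 4."
End Sub

'@TestMethod("WorkbookTests")
Private Sub WritesValueToFirstCell()
    'Slow tests that actually touch the workbook get their own category,
    'so they can be skipped during quick iterations once the
    'Run Selected Category option works (see item 3).
    ThisWorkbook.Worksheets(1).Range("A1").Value = 42
    Assert.IsTrue ThisWorkbook.Worksheets(1).Range("A1").Value = 42
End Sub

'Hypothetical code under test, inlined here to keep the sketch self-contained.
Private Function Add(ByVal a As Long, ByVal b As Long) As Long
    Add = a + b
End Function
```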
In short, the Test Explorer needs more UI loving. Of course, we welcome any new contributors who can make those small changes.
Ok, anyone who is interested in contributing can use those referenced issues to get started. @Irubataru if you have anything to add, feel free to tack on here so we can see how we can make it even better. 😄
Eventually I would be interested in contributing to this project myself, but there is a bit of a barrier to entry with something like that. Maybe a difficult question to answer, but are there any resources you would recommend? I am probably nowhere near the level at which I can help, as I have been coding mostly on Linux my entire life and haven't done much with Windows at all. On top of that, I haven't touched C# at all, but I have been coding C++ for over 10 years now, so it shouldn't be that hard to pick up.
You guys are doing a great job, and without this tool my current job would have been a living nightmare, so I thank you all from the bottom of my heart and hope that I can eventually contribute myself.
Note that I made #4705 because it's my fault that this issue went on a tangent and turned into a free-for-all rambling discussion; those issues split out the individual, erm, issues so they can be addressed accordingly. I apologize for the proliferation and gratuitous relabeling. 😆
@Irubataru I understand that several contributors here have done well with Jon Skeet's C# in Depth. There's also the chat room which you can join in to get additional help/guidance as part of contributing.
I have been using the latest version since PR #4769 was accepted and I just wanted to say that I am really happy with the change, it has made running my tests so much easier.
Especially now that I am going through a massive refactoring, with my unit tests being the only thing that really compiles on its own, it is very useful to be able to run all tests in a given module.
Thanks!
In the latest release (2.3.1.4308), a refresh resets the status of all of my tests, regardless of whether they have been affected. I remember this not being an issue with the previous version I used, although I can't remember which version that was. In general it makes writing and adding tests a real chore, as you have to either find the test you are working on in the menu and run it individually, or rerun every test for every change you make to your code or your tests.
Ideally, I'd want it to work how it used to, as I remember that working "as expected".
To reproduce
Edit (added the version/host info)
Version 2.3.1.4308
OS: Microsoft Windows NT 10.0.17134.0, x64
Host Product: Microsoft Office x86
Host Version: 16.0.11029.20108
Host Executable: EXCEL.EXE