Open Evangelink opened 9 months ago
I'm not terribly keen on adding references to closed-source libraries. In general, I've taken a look at the licenses for the dependencies added here, and I'm not sure how I feel about them. I'm not a lawyer, and reading legalese (especially in a language that's not my native one) is not what I do for a living, so I'm not outright rejecting it yet, but I will have to check with some other parts of the .NET and open source communities before I make up my mind on this.
I appreciate the effort taken to add support for this new testing platform. I'm just not sure I like the new platform yet though.
The platform itself is open source. Most of the extensions are currently closed source but free to use and redistribute; we hope to move them towards OSS, but it will take some time (for readers: feel free to open an issue on testfx asking for some extensions to be made OSS). Retry and Hot Reload are on some kind of dual license (free for OSS, otherwise requiring some Microsoft license - VS, DevKit, Azure or anything).
It's also possible to get rid of the bridge and implement the new APIs directly, which would bring more benefits (although at the cost of backwards compatibility).
All of that being said, I totally understand your view.
For the pros of the new platform, they are described in the links in the PR. There are also features of the platform (such as test nodes being a tree of tests) that can't be shown directly today as the Test Explorers are too limited.
Take your time to review and process the idea.
Also feel free to reject it.
Have a nice day
Hi @Alxandr ,
Just wanted to let you know that we are making more progress on this new testing platform, and that NUnit just merged their PR and released a beta version of the new package.
I'd be happy to either resume work with you here or set up a call to give you more details if you want.
Hello. As of now I'm not interested in adding dependencies on packages that are not at least source-available. And even that, I imagine, would probably require me to drop the FOSSA status from this project. Another thing I really dislike about these NuGet packages is that they all have their "source repository" links set to https://github.com/microsoft/testfx. As far as I can tell, this is just flat-out false - and if so it should really be remediated.
It's also possible to get rid of the bridge and to implement directly the new APIs which would bring more benefits (although cutting back compat).
Getting rid of backwards compatibility is definitely unfortunate but might be the way forward. Another option is just to make it a separate package I guess.
The third option, and the one I'm currently leaning towards the most (unless the extensions added here have their licenses updated), is having both implementations in one package. Given that this is doable with some sort of bridge, it should in theory be doable to have implementations for both runners in one assembly, no?
@Alxandr Thanks for your valuable input. I am happy to let you know that we got approval to OSS the parts required to move forward this PR. I'll be working on making it available for v1.3.0 of the platform (and extensions) and once available, I'll ping you here.
Ah - perfect. Looking forward to see what the new testing platform can provide :)
PR is merged: https://github.com/microsoft/testfx/pull/3133
and here is some technical documentation for the platform: https://github.com/microsoft/testfx/blob/main/docs/testingplatform/Index.md
Just a heads up - this week is pretty stuffed, so I might not be able to get to this before next week
No worries @Alxandr. Anyway, we will need the 1.3 preview to get the open-sourced, fully MIT version, but I wanted to surface CI issues already so I can plan ahead.
FYI; I've not really started looking at this due to the build failing. I assumed that was something you'd fix before I started a proper review - but if not, that's obviously something that needs to be fixed.
@Alxandr I have updated the branch. `dotnet build`, `dotnet test`, and `dotnet pack` are working locally.
Ok, so it was working locally because I had one build done already... I am having some difficulty fixing the tests because of both the YoloDev SDK main branch and the fact that the DLL is not yet built when the props and targets are referenced.
Internally we are dogfooding on a previously built version, so we don't face this issue of using a locally built reference.
I am trying to understand which build step to hook into for the copy.
Yeah - I started to look at it a bit myself. I was a bit confused, because I seemed to recall we used to build a NuGet package and then run tests against that, but I might be mixing up repos in my head. I'll look around a bit and see if maybe a refactor of the build/test setup here is warranted. I'd really like to run tests on a sample project against the actual to-be-published nupkg, instead of manually referencing files, because I've previously had issues where the nupkg was missing things.
I have explored a few solutions, and as far as I can see there are 2 paths we could take, both with pros and cons:

1. The test project doesn't reference the package directly, but instead has a set of tests that create projects and add the NuGet package reference at "runtime".
2. We `git add --force` a fake/empty set of files in the artifacts folder so that MSBuild can resolve them, knowing they will get overridden when we execute the test project.

I would go with option 1 as it seems to be the cleanest one, but I'd like your opinion before implementing either.
There are simpler solutions - just remove the test project from the sln and modify CI to create a NuGet package, place it in a directory, and reference it from the test project. I'm pretty sure I do that in another project of mine that produces MSBuild SDKs - but it's been a while since I looked at it. Just let this simmer until the weekend, and I will hopefully have time to get reacquainted with that code.
Just spent a few seconds looking, and there's this file: https://github.com/YoloDev/YoloDev.Sdk/blob/main/sample/NuGet.config - so I'm pretty sure I remember correctly. I think I might want to do the same solution here.
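For readers following along, a NuGet.config along those lines might look roughly like this - a sketch only, assuming CI packs the nupkg into a local `artifacts` folder (the path and source key names here are hypothetical, not taken from the linked repo):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- drop inherited sources so the locally packed nupkg always wins -->
    <clear />
    <add key="local-packages" value="../artifacts" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```

One gotcha with this approach is NuGet's global package cache: reusing the same package version between runs can serve a stale copy, so bumping the version per run (or clearing the cache) helps.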
Done! Please let me know if you would like anything to be done differently
@Evangelink I've updated the repo to pass CI; however, the tests only run when running with `-p:EnableExpectoRunner=true`. That needs to be resolved, and I'm not entirely sure why this is the case as is.
> however the tests only run when running with `-p:EnableExpectoRunner=true`.
It's strange; it seems to be ok for me locally when doing `dotnet test`:
Oh sorry, did you mean that the legacy mode is not working?
> Oh sorry, did you mean that the legacy mode is not working?
It should be fixed now; the properties in the `fsproj` were hardcoded without taking the CLI into consideration.
PS: Disregard the `/tl:false`; I have a .NET 9 preview installed, and this is an upcoming feature :)
I think I'm going to add some (conditional) failing tests as well, so that I'm sure the tests actually run before merging. It's very hard to distinguish no tests running from all tests passing.
You could also redirect the output to a file and check that it contains the expected footer.
Just checking for a non-0 status code is probably good enough. Given that we also run green tests, that should cover most bases.
My plan is to do something like `#if SHOULD_FAIL`, and then run those tests with both legacy and new mode and check that they do indeed fail.
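As a rough illustration (module and test names here are made up, not from the actual PR), such a conditionally compiled Expecto test could look like:

```fsharp
module Sample.FailingTests

open Expecto

[<Tests>]
let failingTests =
    testList "intentionally-failing" [
#if SHOULD_FAIL
        // Only compiled in when the build defines SHOULD_FAIL,
        // e.g. dotnet test -p:DefineConstants=SHOULD_FAIL
        test "this test must fail" {
            Expect.equal (1 + 1) 3 "deliberately wrong, to prove the tests actually ran"
        }
#endif
    ]
```

Note that passing `DefineConstants` on the command line replaces any constants set in the fsproj, so the project may need to append to the property rather than rely on the defaults.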
Let me know if you are handling this or need some help. Looking forward to having the PR merged and released!
If you would like to, feel free to add something like what I just described. Else I'll probably get around to it this weekend. Should be pretty simple as long as conditional compilation (`#if`) is unproblematic in F# (I don't use the language these days). Just add a couple of new recipes to the justfile, and make the `test` target depend on them. Easiest is probably to use a `pwsh` shebang for best cross-platform support (example).
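A sketch of what those recipes might look like - recipe names and the sample project path are hypothetical, and the `SHOULD_FAIL` define is the one discussed above:

```just
# green tests plus a run that is expected to fail
test: test-green test-should-fail

test-green:
    dotnet test sample/Sample.Tests

# compile in the deliberately failing tests and assert the run fails
test-should-fail:
    #!/usr/bin/env pwsh
    dotnet test sample/Sample.Tests -p:DefineConstants=SHOULD_FAIL
    if ($LASTEXITCODE -eq 0) { throw "expected the test run to fail, but it passed" }
    exit 0
```

The `pwsh` shebang recipe keeps the exit-code check identical on Windows, macOS, and Linux, which plain `sh` recipes would not.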
I added the ability to run some failing tests (first to just check that they actually run - I'm about to add a safety net to make sure they do automatically). One thing I noticed, however, is that I don't get any information about which tests failed and how they failed with the new platform runner:
vs legacy:
Is this just a (current) limitation of the platform? Or is there something that should be done in the test adapter?
I will make a test but I think I just inverted the following 2 properties:
> I will make a test but I think I just inverted the following 2 properties:
Just so we're not both waiting for each other - I understood this as you're going to make changes? I'm not asking you to rush or anything; feel free to take your time, just making sure you're not waiting for me.
Everything seems to be working - the output now also contains the details of what failed. If you are happy with the result I think we can probably merge this now.
@Alxandr we have just released v1.4 of the platform, so if that's fine with you, I'll do the update. It might be worth doing some manual tests on your side, because we have introduced live console test reporting.
> @Alxandr we have just released v1.4 of the platform so if that's fine for you, I'll do the update. It might be worth for you to do some manual tests because we have introduced some live console test report.
What kind of "manual tests"? Just run the tests in the console? I don't have Visual Studio available on the computer I use to develop this (it's a Linux machine).
> What kind of "manual tests"? Just run the tests in the console?
Sorry! Yes, this is a console live report feature! There were a couple of changes, but this is the original PR: https://github.com/microsoft/testfx/pull/3292.
I recommend adding a few slow tests so that you can properly see the console live update.
Also, here is the public changelog for platform 1.4: https://github.com/microsoft/testfx/blob/main/docs/Changelog-Platform.md#140---2024-09-11
I have updated the first test to add a sleep of 5s, with that here are the outputs:
`dotnet test` using VSTest and terminal logger (net9):
`dotnet test` using VSTest, regular output:
`dotnet test` using Microsoft.Testing.Platform and terminal logger (net9):
`dotnet test` using Microsoft.Testing.Platform, regular output:
Calling the exe, or doing `dotnet` or `dotnet exec` on the dll:
Thanks for all the work you've put into this :).
Well thank you for being open to the change and for keeping up with all these updates!
I feel much more confident with the new CI testing that this will not blow up peoples builds, and this will also make it easier for me to review PRs for this in the future.
Yes! Thanks for your involvement here.
--
We are going to release `1.4.3` soon with a couple of fixes linked to extensions; maybe you want to wait for this before merging. Otherwise, I'll do a follow-up PR with the bump only.
I'd like to add Expecto and your adapter to our new platform documentation page https://learn.microsoft.com/dotnet/core/testing/unit-testing-platform-intro?tabs=dotnetcli#supported-test-frameworks - would that be OK with you?
I think we'll leave the PR until you know whether there are (or will be) any guidelines for naming the properties; otherwise I think we'll go with renaming it to `EnableExpectoTestingPlatformIntegration`. I'm not too worried about future updates like `1.4.3`, but I don't want to release one set of property names just to rename them two weeks later.
Relates to the following Twitter discussion https://x.com/rastreus/status/1757843921198329864?s=20
My team and I have been working on a new alternative to the VSTest testing platform, called Microsoft Testing Platform (it powers the MSTest runner). Blog post announcement: https://devblogs.microsoft.com/dotnet/introducing-ms-test-runner/
Most of the information on this page is related to the platform more than to MSTest: https://learn.microsoft.com/dotnet/core/testing/unit-testing-mstest-runner-intro?tabs=dotnetcli
Calling the exe directly:
With support for the new platform through `dotnet test`:
Disabling support for the runner in `dotnet test`: