CBielstein / APRSsharp

APRS# - Modern APRS software for the amateur radio community
MIT License

.NET framework version change #21

Closed. eddywine closed this issue 4 years ago.

eddywine commented 4 years ago

@CBielstein I made the necessary changes, although 13 of the 107 tests are failing. This is possibly because of the general change of the assertions from `Assert.Fail` to `Assert.True(false, "string");`, which has been shown to produce output different from the expected output. Is it okay if I go ahead and create a helper method as discussed in https://stackoverflow.com/questions/14631923/cannot-find-assert-fail-and-assert-pass-or-equivalent, or is it just an error in decoding the different data?
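
For reference, the helper method that Stack Overflow thread describes is a thin wrapper around `Assert.True(false, ...)`, since xUnit at the time had no `Assert.Fail`. A minimal sketch, assuming xUnit (the `TestHelpers` class name is illustrative):

```csharp
using Xunit;

public static class TestHelpers
{
    // Illustrative helper: xUnit (at the time) had no Assert.Fail,
    // so the usual workaround wraps Assert.True(false, message),
    // which always throws with the given message.
    public static void Fail(string message)
        => Assert.True(false, message);
}
```

A test can then call `TestHelpers.Fail("not yet implemented");` instead of repeating `Assert.True(false, ...)` at each failure point.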

CBielstein commented 4 years ago

> @CBielstein I made the necessary changes, although 13 of the 107 tests are failing. This is possibly because of the general change of the assertions from `Assert.Fail` to `Assert.True(false, "string");`, which has been shown to produce output different from the expected output. Is it okay if I go ahead and create a helper method as discussed in https://stackoverflow.com/questions/14631923/cannot-find-assert-fail-and-assert-pass-or-equivalent, or is it just an error in decoding the different data?

@eddywine These are "legitimate" test failures in that they were left that way intentionally during development. This is the outcome of 2016-me using test-driven development and not finishing the process. As I went through the APRS specification, I wrote tests to cover cases I discovered and left them failing until I implemented the correct fix (in a more mature project, this would be a bad thing to do, and I would instead break the tests down even smaller for each PR). However, since I never finished the library, they remained failing. Basically, these tests check unimplemented functionality, so they aren't really reliable tests.

So these tests don't really help us, and we want to start protecting main from failing tests (which will become especially important when you add a CI build soon). But we also don't want to delete them: they will come in handy in the near future, and we don't want to forget about them, as they still cover things we need to fix.

We can do this by adding a `Skip` parameter to the `[Fact]` attribute on each of the failing tests. This lets you specify a reason string, and the test is then reported as skipped instead of passed or failed. In addition to skipping the tests, we should open a GitHub issue to ensure the work stays on our list. So do the following:

  1. Open a new issue in our repository that says something like "Fix skipped tests from old repository" and note in the description that those tests cover currently unimplemented features.
  2. Change the `[Fact]` attribute on each of the failing tests to skip with a reason containing the issue number and title you created above, something like `[Fact(Skip = "### Fix skipped tests from old repository")]` (where ### is whatever issue number GitHub assigns when you create the issue); see the sketch after this list.
  3. `dotnet test` should now show 13 tests skipped but none failing.
  4. Push that and open your PR.
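
As a sketch of step 2, a skipped test would look roughly like this (the class name, test name, and body are illustrative; `###` stands in for the real issue number):

```csharp
using Xunit;

public class PacketDecodingTests
{
    // Skipped until the functionality it covers is implemented;
    // the reason string points back at the tracking issue.
    [Fact(Skip = "### Fix skipped tests from old repository")]
    public void DecodesUnimplementedPacketType()
    {
        // Test body for currently unimplemented functionality.
    }
}
```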

We can and will come back to these tests sooner or later, when we need the functionality they cover.