Open tajmone opened 6 years ago
I don't mind if tests for edge cases are included, but they should be disabled (with a "TODO" note and a link to a bug). We shouldn't have failing tests, though. That means you can roll out a language you know works for most cases, and continue to fix things over time.
> but they should be disabled
@marcoscaceres What about something like sample.skip.txt vs sample.expect.txt for cases like this? The test runner would look for both, and if it found a skip file it would just ignore it...
Unless you had a better idea for how they should be disabled?
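If it helps to visualize it, here is a minimal sketch of how the markup test generator could honor such a suffix; the directory layout and function names are my assumptions, not the actual test code:

```js
'use strict';

const fs = require('fs');
const path = require('path');

// Hypothetical sketch: register one Mocha test per fixture, treating any
// file whose name contains ".skip." as a pending (never-run) test.
function defineMarkupTests(languageDir) {
  const fixtures = fs.readdirSync(languageDir)
    .filter((f) => f.endsWith('.expect.txt') || f.endsWith('.skip.txt'));

  fixtures.forEach((file) => {
    const testName = path.basename(file, '.txt');
    if (file.includes('.skip.')) {
      it.skip(testName); // reported as pending, not as a failure
    } else {
      it(testName, () => {
        // ...highlight the matching source fixture and compare the
        // result against the contents of this .expect.txt file...
      });
    }
  });
}
```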
I do see the value in tests for edge cases.
Sure, adding .skip. could work.
I like @yyyc514's idea; it would solve my initial request without affecting backward compatibility.
I think that adopting some meaningful suffixes to add more functionality to the test suite is a good idea, for it would allow adding more features without altering the default behavior.
Maybe .ignore.txt better conveys the idea of a test that is executed but whose result is ignored, whereas .skip.txt might be used for tests that are left out of the test suite's standard execution altogether (unless some special invocation parameter is used). The former could still produce useful output without affecting the exit code (a kind of "error vs warning" distinction).
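To make that error-vs-warning distinction concrete, this is one way an .ignore. fixture could still run, print its mismatch, and leave the exit code untouched. Purely illustrative: defineIgnoredTest and all its parameters are made-up names, not existing test-suite code:

```js
const fs = require('fs');

// Illustrative only: register an ".ignore." fixture so that it runs and
// reports a mismatch as a warning, but ends up pending rather than failed.
function defineIgnoredTest(testName, sourceFile, expectFile, highlightFixture) {
  it(testName, function () {
    const actual = highlightFixture(sourceFile); // hypothetical highlighter
    const expected = fs.readFileSync(expectFile, 'utf-8');
    if (actual !== expected) {
      console.warn(`[ignored] ${testName}: output does not match yet`);
      this.skip(); // Mocha marks the test as pending, not failing
    }
  });
}
```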
I'm all for a little flexibility here, but I'm not sure what Mocha allows regarding output during tests, reporting, etc.
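For reference, Mocha does ship with skip primitives out of the box that a suffix convention could map onto; this is standard Mocha usage, nothing specific to this test suite:

```js
describe('asciidoc markup', () => {
  // Declaratively skipped: listed as pending, never executed.
  it.skip('verbatim blocks', () => { /* ... */ });

  // Skipped at runtime (a regular function is needed to access `this`).
  it('passthrough content', function () {
    if (process.env.RUN_EDGE_CASES !== '1') this.skip();
    // ...assertions for the edge case...
  });
});
```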
I have a question about the Mocha tests: it's not clear to me whether it's a requirement that the committed tests in test/markup/<language>/ must all pass, or whether failing edge cases can be kept there as a reminder of the fixes that still need to be implemented for a syntax. For example, I noticed that the AsciiDoc syntax fails on many edge cases and on advanced Asciidoctor markup. So I was thinking of first creating all the tests with their expected results, and then working on the syntax until all tests pass.
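Roughly, each markup test highlights a source fixture and compares the generated HTML against its committed .expect.txt file. A hedged sketch of one such test, assuming the current hljs.highlight() API and an asciidoc fixture pair (the suite's real code may differ):

```js
const fs = require('fs');
const assert = require('assert');
const hljs = require('highlight.js');

// Sketch of a single markup test: highlight the source fixture and
// string-compare the output with the expected markup on disk.
it('default', () => {
  const source = fs.readFileSync('test/markup/asciidoc/default.txt', 'utf-8');
  const expected = fs.readFileSync('test/markup/asciidoc/default.expect.txt', 'utf-8');
  const actual = hljs.highlight(source, { language: 'asciidoc' }).value;
  assert.strictEqual(actual.trim(), expected.trim());
});
```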
Ideally, even failing tests should be committed to the master branch, because while working on the syntax, the more tests are available, the easier it is to see whether and how new changes affect the expected results of all previous tests. In some cases a small tweak to the syntax could fix many failing tests; in AsciiDoc this is quite likely. For example, introducing better support for verbatim and passthrough contents might affect a whole range of elements that are currently parsed incorrectly.

Since complex syntaxes like AsciiDoc might require quite a long time to get all the edge cases straight, having all the expected test results in the Mocha test folder (regardless of whether they pass or fail) would be a good way to keep track of the work, as well as to offer a sort of roadmap of missing features for each syntax.
This way, any contributor could use Mocha to see which unresolved syntax problems are pending, and anyone who notices that a syntax doesn't behave as expected could add a test for the failing case, even if they aren't planning to fix it themselves.
I didn't find much on this topic in the documentation, which focuses mainly on how to test one's own new syntax. It doesn't say, for example, what to do when a new syntax is in a usable state but some edge-case tests still fail: should those tests be removed from the test folder, or kept there for future improvements? Obviously, developing a syntax definition is often a multi-step job, where the first release might not cover edge cases or advanced features but is still usable.