hadley closed this issue 5 years ago
I think this leads to a natural split when testing errors: you have many tests where you use the class argument to verify that the correct error is thrown, and then you have a few tests with expect_known_result() where you check that the error generates a message that's useful to humans.
cc @jennybc
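A minimal sketch of that split, using testthat's expect_error() class argument and expect_known_output(); my_fun() and the condition class are placeholders:

test_that("bad input signals the right condition class", {
  expect_error(my_fun(-1), class = "my_fun_error")
})

test_that("the error message reads well for humans", {
  # capture the condition and snapshot its printed message
  err <- tryCatch(my_fun(-1), error = identity)
  expect_known_output(print(err), "my_fun-error.txt")
})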
Maybe we should have a top-level equivalent to test_that() that runs its contents and saves the output to disk. Then the test above would become:
regress_that("df.txt", {
vec_type(mtcars)
vec_type(iris)
})
This takes us full circle back to the regression tests that base R uses extensively.
It is nice to have a single test_that(), but I agree that the benefit of having multiple expectations within a single knitr run outweighs that.
As an alternative, maybe expect_known_output could create multiple expectations? I think that's possible: just call expect() multiple times. Not sure if it is better; multiple expectations within a single expect_* seem confusing. But for the end user it might not matter much.
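A rough sketch of that, with expect_known_lines() as a hypothetical (not testthat) helper that emits one expect() call per line of captured output:

expect_known_lines <- function(object, file) {
  new <- capture.output(print(object))
  old <- if (file.exists(file)) readLines(file) else character()
  # one expect() per line, so each mismatching line fails separately
  for (i in seq_len(max(length(new), length(old)))) {
    expect(
      isTRUE(new[i] == old[i]),
      sprintf("line %d does not match %s", i, file)
    )
  }
  # update the reference, as expect_known_output() does by default
  writeLines(new, file)
  invisible(object)
}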
TBH I am not 100% sure that we should base everything on printing. For example, even in the simple case of a tibble, only the first n rows will be printed, but we want to detect differences in all rows.
Maybe what we need is some special printing? Or a way to compare the full object to the reference, and then a way to explore the differences if a test fails?
I think printing gets you 90% of the way there, and you can always customise the printing.
The disadvantage of basically any other approach is that you lose nice diffs.
Yeah, but you only need the diffs if a test case fails. So I think I would 1) save and compare the full objects, and 2) if a test fails, print them (by default) to get the diffs.
To make it more flexible, we could have a test_diff() or print_diff() generic, to be able to customize what is printed and what is diffed.
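A rough sketch of that approach, with all.equal() standing in for a test_diff()/print_diff() generic; expect_known_object() is hypothetical, not a testthat function:

expect_known_object <- function(object, file) {
  if (!file.exists(file)) {
    saveRDS(object, file)      # first run: record the reference object
    succeed()
    return(invisible(object))
  }
  ref <- readRDS(file)
  cmp <- all.equal(ref, object)
  if (isTRUE(cmp)) {
    succeed()
  } else {
    # the diff is only produced (and printed) when the comparison fails
    fail(paste(c(paste0("object differs from ", file, ":"), cmp), collapse = "\n"))
  }
  invisible(object)
}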
That behaviour is still just a special case of running arbitrary code in an Rmd 😄
Yeah, I am not arguing against the Rmds. Only against comparing the printouts instead of comparing the objects.
Needs an option to override crayon behaviour.
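For now, a sketch of a workaround, assuming the code under test respects the crayon.enabled option (x and the file name are placeholders):

withr::with_options(
  list(crayon.enabled = FALSE),   # force plain, colour-free output
  expect_known_output(print(x), "x-plain.txt")
)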
I've noticed that I've started writing output regression tests like this:
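A sketch of the pattern meant here, assuming expect_known_output() wrapped around manual cat()/print() calls (the specific calls are illustrative):

expect_known_output({
  cat("vec_type(mtcars):\n")
  print(vec_type(mtcars))
  cat("\nvec_type(iris):\n")
  print(vec_type(iris))
}, "test-vec-type.txt")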
This is starting to feel like a poor man's knitr, which made me realise we need an expect_known_something() that uses evaluate to show both the input code and the output results. This would subsume #681 (assuming we added crayon options) and #768.
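A rough sketch of the evaluate-based idea; expect_known_transcript() is a hypothetical name, and evaluate::replay() reprints each input expression followed by its results:

library(evaluate)

expect_known_transcript <- function(code, file) {
  res <- evaluate(code)                  # `code` is a string of R source
  txt <- capture.output(replay(res))     # source and output, interleaved
  expect_known_output(cat(txt, sep = "\n"), file)
}

expect_known_transcript("vec_type(mtcars)\nvec_type(iris)", "df.txt")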