telegott opened this issue 1 year ago
First of all, thanks for this great package. I greatly appreciate working with it. I have a suggestion for structuring tests, although I do not know how difficult it is to implement in a bullet-proof way. It follows the Ruby test framework `rspec`, and having worked with Ruby, Python and R quite a bit, I have the feeling this is the gold standard for how human-readable tests can look.
Basically it consists of three things:
- `describe` states the subject of your test, i.e. the name of the function. E.g. if the function for connecting to a database is called `connect`, you'd write `describe('connect', {...})`.
- `context` states the data environment for the test: you provide specific data to a function, you mock things, or you set an environment variable. A call would look like e.g. `context('when a database url is specified in the environment', {...})`.
- `it` wraps one or more `tinytest::expect_*` statements. This is the innermost layer, and it also just provides a descriptive text about what you want to happen, e.g. `it('connects to the database', {...})` or `it('raises an error', {...})`.
At the most basic, this provides a great way to visually parse test files: what is tested, what setups there are, and under which circumstances you expect what. This creates a logical structure that's easier to read than a flat sequence of `data = data.frame(...); expect_equal(...)` calls. `context` might modify the environment, which should be undone afterwards.

For example, I have a database connection function which depends on the environment variable `APP_ENV` (which is set to `"test"` when running the tests). Based on that, a section of a config list is used to provide the connection parameters.
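A minimal YAML sketch of such a config, with one section per `APP_ENV` value (the keys and values here are hypothetical, just to illustrate the layout):

```yaml
# config.yml -- one parameter section per APP_ENV value (hypothetical)
test:
  host: localhost
  port: 5432
  dbname: app_test
development:
  host: localhost
  port: 5432
  dbname: app_dev
production:
  dbname: app
```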
For the development case, I have a fork: if there is an environment variable called `DATABASE_URL`, it tries that first, otherwise it uses the default values.
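A sketch of that fork (the function name and the defaults are made up for illustration):

```r
# development: prefer DATABASE_URL if set, otherwise fall back to defaults
db_params <- function(defaults) {
  url <- Sys.getenv("DATABASE_URL", unset = "")
  if (nzchar(url)) list(url = url) else defaults
}
```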
In the `test` environment, it always uses the defaults; in `production`, it always relies on a `DATABASE_URL` being set.

A naive implementation of the thing (using the `box` package):
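Something along these lines; the module layout, the function name `connection_params`, and the config path are assumptions for the sketch:

```r
# connect.R -- environment-aware connection parameters (sketch)
box::use(
  yaml[read_yaml],
)

#' @export
connection_params <- function(config_path = "config.yml") {
  env      <- Sys.getenv("APP_ENV", unset = "development")
  defaults <- read_yaml(config_path)[[env]]
  url      <- Sys.getenv("DATABASE_URL", unset = "")

  if (env == "production") {
    stopifnot(nzchar(url))   # production always relies on DATABASE_URL
    list(url = url)
  } else if (env == "development" && nzchar(url)) {
    list(url = url)          # development tries DATABASE_URL first
  } else {
    defaults                 # test (and development fallback): config defaults
  }
}
```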
And this is how the test file could look:
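For instance (`describe`/`context`/`it` are the proposed additions, not existing tinytest functions, and the expectations assume the `connection_params()` sketch above):

```r
# test_connect.R
box::use(./connect[connection_params])

describe("connect", {
  context("when a DATABASE_URL is specified in the environment", {
    Sys.setenv(APP_ENV = "development",
               DATABASE_URL = "postgres://user@host:5432/app")
    it("connects via the URL", {
      expect_equal(connection_params()$url,
                   "postgres://user@host:5432/app")
    })
    Sys.unsetenv("DATABASE_URL")   # undo the environment change
  })
  context("when no DATABASE_URL is set", {
    Sys.setenv(APP_ENV = "test")
    it("falls back to the config defaults", {
      expect_equal(connection_params()$host, "localhost")
    })
  })
})
```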
This leads to a full test output like the following.
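A mock-up of such output, loosely modeled on `rspec`'s documentation format:

```
connect
  when a DATABASE_URL is specified in the environment
    it connects via the URL
  when no DATABASE_URL is set
    it falls back to the config defaults
```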
It would also be cool to snapshot the number of failed tests before an `it` call and append `[FAILED]` to its line if that number changes during the execution of the block. That makes it very easy to visually grasp what is going wrong.
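A toy sketch of such an `it()` wrapper; to keep it simple, expectations are passed as arguments rather than as a `{...}` block, and the wrapper itself is hypothetical, not part of tinytest:

```r
library(tinytest)

# print the description, appending [FAILED] when any expectation fails;
# tinytest expectations evaluate to logical scalars, so isTRUE() works
it <- function(description, ...) {
  results <- list(...)
  failed  <- !all(vapply(results, isTRUE, logical(1)))
  cat("it ", description, if (failed) " [FAILED]", "\n", sep = "")
  invisible(results)
}

it("adds numbers",
   expect_equal(1 + 1, 2),
   expect_equal(2 + 2, 5))   # prints: it adds numbers [FAILED]
```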
What do you think about an addition like that? There are many more features that `rspec` has, e.g. lazily evaluated variables that can be defined once for a whole `context` as a base setup and overridden in individual cases, but this might be a first step if it is something you'd consider.

---

Reply:

Hi there, thanks for the extensive explanation!

In the example there is a test declaration in the form of a function call spanning 61 lines. As someone not used to this framework, that is a lot harder to edit and understand than a short sequence of imperative programming statements, so it would probably hamper the learnability of tinytest.

Moreover, one of the core design ideas of tinytest is that a test script is just an R script that should be runnable with `source()` (or `run_test_file()`). So I feel this is out of scope for tinytest.