Snaipe / Criterion

A cross-platform C and C++ unit testing framework for the 21st century
MIT License

Add C++11 interface #35

Closed Snaipe closed 9 years ago

Snaipe commented 9 years ago

A C++11-compatible interface should be doable with a bit of macro tricks.

This is not a critical feature, and can be dropped if the effort is not worth the results.

Snaipe commented 9 years ago

The interface seems to be working at the moment (cf. branch features/cpp-compat-2), but there are some issues:

Snaipe commented 9 years ago

Theories are now C++11-compatible. There are no visible regressions on any platform, so it should be good to merge soon.

The only thing left to do is to make the C++ samples compile with -pedantic, which is deactivated for now since there are issues with variadic macro parameters that expect at least one argument yet seem to receive more than one. The compiler is probably not wrong, so I'll need to investigate.

Snaipe commented 9 years ago

All warnings are fixed; merging right now.

am11 commented 9 years ago

:+1:

@Snaipe, (sorry for going a little off-topic) I was reading the documentation, but I could not figure this out: how do we execute Theory(..) and Test(..) written in a separate source file (say theories.c) from our main() runner?

Scenario:

We have a large set of fixtures for trans-compilation assets (Sass->CSS):

Another reason for us to switch to Criterion is to have integration tests for internal features of the libsass C API, such as memory management, source maps (B64-VLQ encoded), etc. Currently we have very few tests for the lib itself, and they are not hooked into the CI. So we intend to use this framework to test the internal behaviours of libsass as well.

Snaipe commented 9 years ago

@am11 this is an interesting situation, since usually criterion provides the main() function for you; however, the basics of providing your own main function are covered here. The gist of it is to first set the configuration options for Criterion, then call criterion_run_all_tests() (see the default main for a more concrete idea).

However, now that I think about it, I should make the parameters and logic behind the default main easily reusable, so that providing your own main would look something like this:

#include <criterion/criterion.h>

int main(int argc, char *argv[]) {
    criterion_init_args(argc, argv);

    // your arg processing here

    return criterion_run_all_tests();
}

That should do it for initialization. Since you're building some kind of option struct yourselves, you might want it to be available to all your tests; criterion does not really support passing user data to tests through parameters in an elegant way, so you might want to make these options a global variable, or an inner static variable that you can later access from the scope of the test function.
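
For instance, a rough sketch of the global variable approach could look like this (runner_options, g_options and the --todo flag are just placeholder names for your own option handling, and criterion_init_args is the proposed helper from the snippet above):

#include <string.h>

#include <criterion/criterion.h>

/* Hypothetical options struct shared between main() and the tests.
 * It is set once in main(), before criterion_run_all_tests(), and
 * treated as read-only configuration inside the tests. */
struct runner_options {
    int include_todo;   /* e.g. toggled by a --todo command-line flag */
};

static struct runner_options g_options;

int main(int argc, char *argv[]) {
    criterion_init_args(argc, argv);   /* proposed helper from the snippet above */

    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--todo") == 0)
            g_options.include_todo = 1;
    }

    return criterion_run_all_tests();
}

Test(runner, sees_global_options) {
    /* The test body reads the global directly. */
    cr_assert(g_options.include_todo == 0 || g_options.include_todo == 1);
}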

Now for the contents of the test themselves, the obvious solution would be to add one Test(...) per asset to test, but this is going to be extremely painful because of the quantity of data you have; so instead, since the feature seems to be lacking, I'm going to refactor a bit of the criterion code and provide a function to dynamically add a test to the runner; that way, you can traverse your data and register your tests on the fly.

One other solution would be to use a Theory to iterate over the data set with something along those lines:

TheoryDataPoints(sass, specs) = {
    DataPoints(const char *, 
            "basic/00_empty", 
            "basic/01_simple_css",
            ...)
};

Theory((const char *directory), sass, specs) {
    // compile input.scss from directory and compare to expected outputs
}

All in all, it boils down to the same question as Test(...): would you rather manually specify all tests in the source file and guarantee that the results of the tests are consistent across commits, or automatically list and add all the tests by traversing the data directory (which is funny for its similarity to the "should I manually list sources or automatically list them in my build system" question)?

am11 commented 9 years ago

@Snaipe, thanks for the informative reply and your kind offer to make changes to accommodate our scenario. Although I will happily follow the DIY guidelines, I would still like to copy @mgreter and @xzyfer for corrections and input, as they have a much better -- 360-degree -- view on this subject.

I was also thinking about using the Theory construct and seeding the fixtures via DataPoints as you described. I started playing around with it a bit yesterday in the libsass repo itself, but got stuck on the main() bit. Eventually, the test runner might land in the sassc repository (as sassc is the executable around libsass), but not necessarily.

The Ruby test runner at present only deals with stdin/stdout from sassc, but IMO it would be a good idea, with Criterion, to tap into the API and make the calls programmatically. For that matter, if we want, we can bypass the sassc executable and use Criterion's out-of-the-box runner to test the bare bones of the library. Also, the Ruby test runner currently does not test the error spec, which is the highest-priority concern for libsass development.

To replace our existing Ruby dependency, ideally we would want to iterate over all directories from a given root (starting point), then collect the input and outputs per spec. Once the seed assets are collected, the runner will spawn theory tests. On this note, we have a concept of "TODO" and "Closed" issues, which simply means for the test runner that if, for example, a --todo argument was provided via the CLI, the TODO specs are tested as well; otherwise only the closed ones are.
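
Roughly, the traversal I have in mind would look something like this (a sketch using POSIX dirent.h; is_todo_spec and for_each_spec are placeholder names, and the actual TODO detection depends on how the spec directories are marked on disk):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical predicate deciding whether a spec directory is a "TODO" one;
 * the real check depends on how the specs are marked. */
static int is_todo_spec(const char *path) {
    return strstr(path, "todo") != NULL;
}

/* Walk the spec root and hand every eligible spec directory to a callback.
 * TODO specs are skipped unless include_todo is non-zero. Recursing into
 * nested folders and a stat()-based directory check are omitted for brevity. */
static void for_each_spec(const char *root, int include_todo,
                          void (*fn)(const char *dir)) {
    DIR *d = opendir(root);
    if (!d)
        return;

    struct dirent *entry;
    while ((entry = readdir(d)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;   /* skip ".", ".." and hidden entries */

        char path[1024];
        snprintf(path, sizeof path, "%s/%s", root, entry->d_name);

        if (!include_todo && is_todo_spec(path))
            continue;

        fn(path);       /* e.g. collect input.scss and the expected outputs here */
    }
    closedir(d);
}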

Since the general-purpose Theory stub would be agnostic of the type of the spec (whether it should expect an error or regular output), we will need some exception handling to identify it and then conditionally assert on the error message, roughly like this:

Theory(...) {
  try {
    compiler_result* result = sass.compile(compiler_options_with_input);
    assert(result->nested_css, output1);
    assert(result->expanded_css, output2);
    assert(result->minified_css, output3);
  } catch (sass_exception) {
    // We can probably use the assert which expect exception?
    assert(sass_exception->message, output1);
    assert(sass_exception->message, output2);
    assert(sass_exception->message, output3);
  }
}

The next thing the Ruby test runner does is generate and convey the test coverage. I think Criterion has already aced that area! :)

Once this main phase is over ("making the Criterion-driven test runner the drop-in replacement for the existing Ruby one"), we will move ahead and test other features of libsass through the API, or even by directly instantiating C++ objects from the sources, such as the functionality of the parser, AST, output emitters, source-map memory managers, etc. (which are not tested when CI runs at present).

Snaipe commented 9 years ago

@am11 Also, if you want to stick to using a theory, but still want to generate the data points at runtime, you can use this workaround:

#include <criterion/theories.h>
#include <stdio.h>

TheoryDataPoints(theory, gen) = {
    DataPoints(int, 42) // parameter placeholder. the 42 does not mean anything, it's just a dummy value.
};

static void generate_datapoints(void) {
    static int arr[] = {1, 2, 3, 4, 5};
    TheoryDataPoint(theory, gen)[0].len = 5;
    TheoryDataPoint(theory, gen)[0].arr = &arr;
}

Theory((int i), theory, gen, .init = generate_datapoints) {
    printf("%d\n", i);
}

so, adapting to the situation:

TheoryDataPoints(theory, gen) = {
    DataPoints(const char *, "") // parameter placeholder
};

// implement this to populate *out and set *size to the size of the generated array
static void list_directories(void **out, size_t *size);

static void generate_datapoints(void) {
    list_directories(&TheoryDataPoint(theory, gen)[0].arr, &TheoryDataPoint(theory, gen)[0].len);
}

static void free_datapoints(void) {
    // assuming list_directories mallocs the array
    free(TheoryDataPoint(theory, gen)[0].arr);
}

Theory((const char *directory), theory, gen, .init = generate_datapoints, .fini = free_datapoints) {
    // compile input in directory
}

(btw, the assert you're looking for in your sample is cr_assert_eq, or cr_assert_str_eq when comparing string contents rather than pointers)
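
To illustrate the difference, here is a tiny sketch (the values are made up; cr_assert_str_eq compares the contents of two strings, while cr_assert_eq compares the values directly, which for char pointers means pointer equality):

#include <criterion/criterion.h>

Test(asserts, eq_vs_str_eq) {
    const char *expected = "a { color: red; }";
    char actual[] = "a { color: red; }";

    cr_assert_str_eq(actual, expected);  /* passes: same string contents */

    /* cr_assert_eq(actual, expected) would compare the pointers instead,
     * which is not what you want for compiler output or error messages. */
    cr_assert_eq(2 + 2, 4);              /* fine for scalar values */
}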

am11 commented 9 years ago

@Snaipe, thanks! I will implement it accordingly. :+1: