I don't think we've got an issue for that yet, @jessicavers?
I believe that for this work we need to create simulated test data: we need ground-truth projection data and also a phantom. I'll create a separate issue on this, #371. The main reason is that later we can also check that what the method does actually makes sense.
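As a rough illustration of what such synthetic data could look like, here is a minimal numpy/scipy sketch: a simple disc phantom as the ground truth and parallel-beam projections generated from it by rotate-and-sum. The helper names and sizes are mine, not anything from HTTomo.

```python
import numpy as np
from scipy.ndimage import rotate

def make_phantom(size=64):
    """Ground-truth phantom: a single disc on a zero background."""
    y, x = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    return ((x - c) ** 2 + (y - c) ** 2 <= (size // 4) ** 2).astype(np.float32)

def make_projections(phantom, n_angles=90):
    """Ground-truth parallel-beam projections: rotate the phantom and
    sum along columns for each angle (a crude Radon transform)."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return np.stack(
        [rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles]
    )  # shape: (n_angles, detector_size)

phantom = make_phantom()
proj = make_projections(phantom)
```

In practice a library such as TomoPhantom could generate more realistic phantoms, but the point is only that both the phantom and its projections are known exactly, so either can serve as ground truth.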
The template tests should probably do the following:
1. Each test automatically appends the loader which loads the synthetic dataset.
2. HTTomo executes the resulting pipeline.
3. The result is tested based on the type of the method, using the initial input and ground-truth datasets. For the majority of methods, which are not reconstruction algorithms, we use the synthetic projection data for testing; for reconstruction methods we can use the synthetic phantom instead.
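The reference-selection part of step 3 could be sketched like this; the `method_type` string is an assumption on my side, not something HTTomo exposes:

```python
import numpy as np

def select_reference(method_type, projections, phantom):
    """Pick the ground-truth dataset to compare against, based on the
    kind of method under test: reconstruction methods are checked
    against the phantom, all other methods against the synthetic
    projection data."""
    if method_type == "reconstruction":
        return phantom
    return projections

# e.g. a pre-processing/filtering method is compared against the projections:
ref = select_reference("filter",
                       projections=np.zeros((90, 64)),
                       phantom=np.zeros((64, 64)))
```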
Using quantitative measures on the result of the test, we can calculate the RMSE of the output compared to the input (it should be above a small tolerance value, confirming that the method has actually changed the input). We can also calculate the RMSE of the output compared to the ground-truth data; it should also be non-zero and larger than the RMSE between input and output. The result also shouldn't contain NaNs or Infs. To conclude, here is a list of measures:
- RMSE(input, output) > tolerance
- RMSE(output, groundtruth) > tolerance AND RMSE(output, groundtruth) > RMSE(input, output)
- the output contains no NaNs or Infs
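The measures above could be wired into a small helper along these lines (a sketch, not HTTomo code; the tolerance default is a placeholder):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two arrays of the same shape."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def check_method_output(input_data, output, ground_truth, tol=1e-6):
    """Apply the listed measures; raises AssertionError if any fails."""
    out = np.asarray(output)
    assert np.isfinite(out).all(), "output contains NaNs or Infs"
    err_in_out = rmse(input_data, output)
    err_out_gt = rmse(output, ground_truth)
    assert err_in_out > tol, "method did not change the input"
    assert err_out_gt > tol, "RMSE against ground truth is below tolerance"
    assert err_out_gt > err_in_out, \
        "expected RMSE(output, groundtruth) > RMSE(input, output)"
    return err_in_out, err_out_gt

# Toy example that satisfies all three measures:
inp = np.zeros((4, 4))
out = np.full((4, 4), 0.5)
gt = np.full((4, 4), 2.0)
e_in, e_gt = check_method_output(inp, out, gt)  # e_in = 0.5, e_gt = 1.5
```

Because the checks are plain assertions, this plugs directly into a pytest-style test without extra machinery.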