Closed fniekiel closed 4 years ago
So my thoughts are:
I am fine with google drive. We should work on keeping the files small, though. This means I should not use full 2k diffraction images to test SER reading capabilities ...
I am unhappy with having the files somewhat hidden in ncempy/test/resources. Maybe we should have a top-level folder with test/example files in the openNCEM repository, so they can also be used by the MATLAB code etc.
Let's collect as many combinations as possible. I guess this collection will grow over time as more people start to use the code. Whenever something does not work, we will probably get to add some more files.
My unit tests right now mainly check for proper input data to the algorithms rather than verifying that they run correctly and yield the right results. I do add test cases that run test data against a known outcome. For the file formats I am still wondering about the best approach. We should probably compare the read-in data to some reference, which could be an EMD version of the same data. But do we only compare the data, or also the metadata? For me, testing usually meant converting to an EMD file and then having a look at the data, the metadata, and the structure.
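One way the data-plus-metadata comparison could look, as a minimal sketch: `compare_to_reference` is a hypothetical helper (not part of ncempy), and the arrays and dicts here are synthetic stand-ins for data read from a SER/DM file and its EMD reference.

```python
import numpy as np

def compare_to_reference(data, metadata, ref_data, ref_metadata,
                         keys=('pixelSize', 'pixelUnit')):
    """Compare read-in data and selected metadata keys against a reference.

    Hypothetical helper; the key names are illustrative only.
    """
    # Compare the numerical payload with a tolerance-aware check
    np.testing.assert_allclose(data, ref_data)
    # Compare only the metadata keys we consider part of the contract
    for k in keys:
        assert metadata.get(k) == ref_metadata.get(k), f"metadata mismatch for {k!r}"

# Synthetic stand-in for a file read and its EMD reference
data = np.arange(12, dtype=np.float32).reshape(3, 4)
meta = {'pixelSize': [0.1, 0.1], 'pixelUnit': ['nm', 'nm']}
compare_to_reference(data, meta, data.copy(), dict(meta))
```

Restricting the comparison to a whitelist of metadata keys sidesteps the question above: incidental metadata (timestamps, microscope session info) can differ between the raw file and the EMD reference without failing the test.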
I added a DM3 file and a DM4 file to the Google Drive folder; both contain the same data and are 3D. The data was reported as 63, 64, 65 in size. I'm not sure how that translates to contiguous data on disk. I also added calibration data so we can be sure to match the calibration data to the proper dimension. See the accompanying image file.
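The ambiguity about contiguous layout can be made concrete with plain numpy: which of the reported dimensions (63, 64, 65) varies fastest on disk decides how to reshape the flat buffer. A minimal sketch, assuming 1-byte pixels so strides equal element counts:

```python
import numpy as np

# If the reported order is (slow, ..., fast), numpy's default C order
# fits directly: the LAST axis is contiguous on disk.
a = np.arange(63 * 64 * 65, dtype=np.uint8).reshape(63, 64, 65)
assert a.strides == (64 * 65, 65, 1)  # last axis: adjacent elements 1 byte apart

# If the file instead stores the FIRST reported dimension fastest,
# reshape with the dimensions reversed and transpose to recover the
# reported shape without copying the data.
b = np.arange(63 * 64 * 65, dtype=np.uint8).reshape(65, 64, 63).transpose(2, 1, 0)
assert b.shape == (63, 64, 65)
assert not b.flags['C_CONTIGUOUS']  # a view; axis 0 is now the fast one
```

With three distinct sizes like 63, 64, 65, a wrong guess shows up immediately as scrambled images, which is exactly why this test file is useful for pinning down the convention and matching the calibration entries to the right axes.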
This is a very old issue. I'm updating the tests to use pytest and will distribute small test data in the ncempy/data folder.
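Locating that in-package data from pytest could look like the sketch below. Both `package_data_dir` and the `data_dir` fixture are hypothetical names, and the assumption that the data sits next to the package's `__init__.py` may not match the final layout.

```python
from pathlib import Path

import pytest

def package_data_dir(package_file):
    """Return the 'data' directory next to a package's __init__ file.

    Sketch only; assumes test data ships inside the installed package
    (e.g. ncempy/data), so tests work without a repository checkout.
    """
    return Path(package_file).resolve().parent / 'data'

@pytest.fixture
def data_dir():
    # Resolving via the installed package keeps tests independent of
    # the current working directory.
    import ncempy
    return package_data_dir(ncempy.__file__)
```

A test would then take `data_dir` as an argument and open e.g. `data_dir / 'some_file.dm3'`, skipping with `pytest.skip` if a particular file is absent.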
We need to overhaul the unit tests to allow convenient collaboration among multiple contributors.
Let's put up a list of requirements and current status, which we can use to discuss and work on these things.
Storage of binary data:
Structure of binary data:
Test data:
Implementation: