Open bryankim96 opened 6 years ago
As a first step, the Travis CI, Coveralls, and Landscape badges in the README seem to have been broken by moving the repo. I'm not sure why; maybe the owner needs to be changed to ctlearn-project? @bryankim96 could you look into this?
The badges should now be working. However, I have noticed it often takes quite some time for the initial check to be completed. So, it could be a little while before some of the badges begin showing up-to-date information.
Travis CI is now working, but I could not get Coveralls or Landscape to sync properly. I've removed them for now. I also tried installing Codecov to replace Coveralls but have removed it for now too since this is better handled after the v0.2.0 release. While it looks like Landscape provides useful and interesting code quality feedback, I found it extremely buggy and poorly documented.
Some ideas for tests of ImageMapper:
import numpy as np

# Adjust this import to match wherever ImageMapper actually lives in the package.
from ctlearn.image_mapping import ImageMapper

def test_image_mapping_one_channel():
    img_mapper = ImageMapper()
    for tel in img_mapper.num_pixels:
        # The input vector is one entry longer than the number of camera pixels.
        num_pixels = img_mapper.num_pixels[tel] + 1
        in_ = np.random.rand(num_pixels, 1)
        out = img_mapper.map_image(in_, tel)
        # The mapped image should have the expected 2D shape and one channel.
        assert out.shape[0:2] == img_mapper.image_shapes[tel]
        assert out.shape[2] == 1

def test_image_mapping_two_channels():
    img_mapper = ImageMapper()
    for tel in img_mapper.num_pixels:
        num_pixels = img_mapper.num_pixels[tel] + 1
        in_ = np.random.rand(num_pixels, 2)
        out = img_mapper.map_image(in_, tel)
        # Same check as above, but with two input channels preserved in the output.
        assert out.shape[0:2] == img_mapper.image_shapes[tel]
        assert out.shape[2] == 2
Is this what you guys are thinking of?
This sounds good. I can make the relevant changes in the rest of this piece of code. The relevant config should also be passed in and read; see the sketch below for one way the tests could be parametrized over it.
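As a rough illustration of passing config through the tests, they could be parametrized with pytest over a dictionary of mapper options. This is only a sketch: the import path and the option name/values here are hypothetical and would need to match whatever the config actually exposes.

import numpy as np
import pytest

# Assumed import path; adjust to the actual module layout.
from ctlearn.image_mapping import ImageMapper

@pytest.mark.parametrize("mapper_config", [
    {},  # default settings
    {"hex_conversion_algorithm": "oversampling"},  # hypothetical config option
])
def test_image_mapping_with_config(mapper_config):
    # Construct the mapper from the (hypothetical) config dictionary.
    img_mapper = ImageMapper(**mapper_config)
    for tel in img_mapper.num_pixels:
        num_pixels = img_mapper.num_pixels[tel] + 1
        in_ = np.random.rand(num_pixels, 1)
        out = img_mapper.map_image(in_, tel)
        # The mapped image shape should not depend on which config was used.
        assert out.shape[0:2] == img_mapper.image_shapes[tel]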
Other priorities (reorganizing the code, implementing new functionality, and making the project accessible and available for outside use) have been our primary focus so far. However, as the codebase grows and we consider the possibility of outside contributions, it seems like a good idea to begin looking seriously at implementing tests for maintaining and monitoring code quality.
The project is relatively small, and there is no need for the sort of rigorous testing used in more complex software. However, with several interacting components, a few simple tests seem appropriate and helpful. Testing portions of the code in isolation should make it easier to recognize bugs and to verify new changes without running the full training pipeline and searching for errors by hand. Tests would also be helpful for evaluating pull requests and preventing regressions and breaking changes.
The framework for testing and continuous integration with pytest and Travis CI is already in place; all that remains is to write the tests themselves. The overall approach was based on the implementation of tests in ctapipe, which may be a helpful example.
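Since pytest is already set up, a shared fixture could cut down on repetition across the ImageMapper tests. This is only a sketch, with the import path assumed rather than confirmed:

import pytest

# Assumed import path; adjust to the actual module layout.
from ctlearn.image_mapping import ImageMapper

# Placing this in a conftest.py would make the fixture available to all test modules.
@pytest.fixture(scope="module")
def img_mapper():
    # Build a single default ImageMapper shared by the tests in this module.
    return ImageMapper()

def test_image_shapes_defined(img_mapper):
    # Every telescope type with a pixel count should also have an output image shape.
    for tel in img_mapper.num_pixels:
        assert tel in img_mapper.image_shapes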
Maybe we can discuss below what (if any) tests might be appropriate/helpful and who would be willing to write them.