bdzyubak / tensorflow-sandbox

A repository for studying applications of Deep Learning across fields and for demonstrating samples of my code and project management

Make end-to-end test for data and model initializers #24

Open bdzyubak opened 2 years ago

The shared_utils/prep_training_data.py and shared_utils/model_initializer.py modules are crucial: they set up defaults and control parameter changes for all supported data input formats and models, respectively. A test is therefore required to verify that this support is maintained after changes to these modules or on a merge to beta.

Currently, the following training scripts are functional, each covering a different data format (a rough smoke-test sketch follows the list):

\KaggleHistopathologyDetection\train_compare_builtin_class_models.py
\Cell-Nuclei-Segmentation\train_generic.py
\COVIDLungSegmentation\train_unet_lung_segmentation.py
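As a rough illustration only, a pytest-style smoke test over these scripts might look like the sketch below. The `--epochs` flag, the script locations relative to the repository root, and the command-line invocation style are assumptions for the sketch, not the repository's confirmed interface.

```python
# Hypothetical sketch: smoke-run each supported training script for one epoch.
# The --epochs flag and exact script paths are assumptions, not confirmed interfaces.
import subprocess
import sys
from pathlib import Path

import pytest

TRAINING_SCRIPTS = [
    Path("KaggleHistopathologyDetection") / "train_compare_builtin_class_models.py",
    Path("Cell-Nuclei-Segmentation") / "train_generic.py",
    Path("COVIDLungSegmentation") / "train_unet_lung_segmentation.py",
]


@pytest.mark.parametrize("script", TRAINING_SCRIPTS, ids=lambda p: p.parent.name)
def test_one_epoch_smoke_run(script):
    """Each supported data format should train end-to-end for a single epoch."""
    result = subprocess.run(
        [sys.executable, str(script), "--epochs", "1"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr
```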

The train_compare script runs all supported models. So, adding a function in each module to support a one-epoch test run, plus a test evaluator that verifies recently trained models were saved, would satisfy the test requirement.
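A minimal sketch of such a test evaluator is below, assuming trained models land in a known output directory. The trained_models/ path, the .h5 extension, and the one-hour freshness window are placeholders for illustration, not the repository's actual save conventions.

```python
# Hypothetical evaluator: verify that a model file was written recently.
# The output directory, file extension, and age threshold are placeholder assumptions.
import time
from pathlib import Path

MODEL_OUTPUT_DIR = Path("trained_models")  # placeholder save location
MAX_AGE_SECONDS = 60 * 60  # treat models saved within the last hour as "recent"


def test_recent_model_was_saved():
    """After the one-epoch runs, at least one freshly saved model should exist."""
    saved_models = list(MODEL_OUTPUT_DIR.glob("**/*.h5"))
    assert saved_models, f"No saved models found under {MODEL_OUTPUT_DIR}"

    newest = max(saved_models, key=lambda p: p.stat().st_mtime)
    age = time.time() - newest.stat().st_mtime
    assert age < MAX_AGE_SECONDS, (
        f"Most recent model {newest} is {age:.0f}s old; expected a fresh save"
    )
```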