This PR adds two new data subsets for train_patch.py, airbus-homogeneous-8 and airbus-homogeneous-24, along with data folders and patch training configs for each.
The learning rate for airbus-homogeneous-8 is bumped from the default 0.03 to 0.08 to speed things up a bit. The learning rate for airbus-homogeneous-24 is triple this, at 0.24, to reap the benefits of training in parallel.
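A minimal sketch of the per-subset learning-rate overrides described above (the `CONFIGS` dict and `get_learning_rate` helper are illustrative only, not the actual config format used by train_patch.py):

```python
# Hypothetical layout of the two new patch-training configs; names and
# structure are assumptions for illustration, not the real config files.
CONFIGS = {
    "airbus-homogeneous-8": {
        "learning_rate": 0.08,  # bumped from the default 0.03
    },
    "airbus-homogeneous-24": {
        "learning_rate": 0.24,  # triple the -8 rate for parallel training
    },
}

def get_learning_rate(subset: str, default: float = 0.03) -> float:
    """Return the subset's configured learning rate, else the default."""
    return CONFIGS.get(subset, {}).get("learning_rate", default)
```

Any subset without an explicit override falls back to the 0.03 default.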
This is an experiment to address issue #2.